00:00:00.000 Started by upstream project "autotest-nightly" build number 4353
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3716
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.038 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.040 The recommended git tool is: git
00:00:00.040 using credential 00000000-0000-0000-0000-000000000002
00:00:00.041 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.059 Fetching changes from the remote Git repository
00:00:00.060 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.091 Using shallow fetch with depth 1
00:00:00.091 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.091 > git --version # timeout=10
00:00:00.145 > git --version # 'git version 2.39.2'
00:00:00.145 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.184 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.184 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.052 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.065 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.077 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.078 > git config core.sparsecheckout # timeout=10
00:00:07.092 > git read-tree -mu HEAD # timeout=10
00:00:07.109 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.127 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.128 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.222 [Pipeline] Start of Pipeline
00:00:07.235 [Pipeline] library
00:00:07.236 Loading library shm_lib@master
00:00:07.236 Library shm_lib@master is cached. Copying from home.
00:00:07.248 [Pipeline] node
00:00:07.259 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest
00:00:07.260 [Pipeline] {
00:00:07.267 [Pipeline] catchError
00:00:07.268 [Pipeline] {
00:00:07.279 [Pipeline] wrap
00:00:07.287 [Pipeline] {
00:00:07.295 [Pipeline] stage
00:00:07.297 [Pipeline] { (Prologue)
00:00:07.313 [Pipeline] echo
00:00:07.314 Node: VM-host-WFP7
00:00:07.319 [Pipeline] cleanWs
00:00:07.328 [WS-CLEANUP] Deleting project workspace...
00:00:07.328 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.335 [WS-CLEANUP] done
00:00:07.569 [Pipeline] setCustomBuildProperty
00:00:07.640 [Pipeline] httpRequest
00:00:08.056 [Pipeline] echo
00:00:08.057 Sorcerer 10.211.164.20 is alive
00:00:08.063 [Pipeline] retry
00:00:08.064 [Pipeline] {
00:00:08.076 [Pipeline] httpRequest
00:00:08.080 HttpMethod: GET
00:00:08.081 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.081 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.098 Response Code: HTTP/1.1 200 OK
00:00:08.098 Success: Status code 200 is in the accepted range: 200,404
00:00:08.098 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.560 [Pipeline] }
00:00:09.572 [Pipeline] // retry
00:00:09.578 [Pipeline] sh
00:00:09.860 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.876 [Pipeline] httpRequest
00:00:10.235 [Pipeline] echo
00:00:10.237 Sorcerer 10.211.164.20 is alive
00:00:10.246 [Pipeline] retry
00:00:10.248 [Pipeline] {
00:00:10.264 [Pipeline] httpRequest
00:00:10.269 HttpMethod: GET
00:00:10.270 URL: http://10.211.164.20/packages/spdk_d58eef2a29f5d65b15a72162d9d79db68f27aa81.tar.gz
00:00:10.271 Sending request to url: http://10.211.164.20/packages/spdk_d58eef2a29f5d65b15a72162d9d79db68f27aa81.tar.gz
00:00:10.292 Response Code: HTTP/1.1 200 OK
00:00:10.293 Success: Status code 200 is in the accepted range: 200,404
00:00:10.293 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_d58eef2a29f5d65b15a72162d9d79db68f27aa81.tar.gz
00:01:14.489 [Pipeline] }
00:01:14.506 [Pipeline] // retry
00:01:14.513 [Pipeline] sh
00:01:14.797 + tar --no-same-owner -xf spdk_d58eef2a29f5d65b15a72162d9d79db68f27aa81.tar.gz
00:01:17.347 [Pipeline] sh
00:01:17.631 + git -C spdk log --oneline -n5
00:01:17.631 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:01:17.631 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:01:17.631 66289a6db build: use VERSION file for storing version
00:01:17.631 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:01:17.631 cec5ba284 nvme/rdma: Register UMR per IO request
00:01:17.649 [Pipeline] writeFile
00:01:17.663 [Pipeline] sh
00:01:17.954 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:17.966 [Pipeline] sh
00:01:18.248 + cat autorun-spdk.conf
00:01:18.248 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:18.248 SPDK_RUN_ASAN=1
00:01:18.248 SPDK_RUN_UBSAN=1
00:01:18.248 SPDK_TEST_RAID=1
00:01:18.248 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:18.256 RUN_NIGHTLY=1
00:01:18.258 [Pipeline] }
00:01:18.271 [Pipeline] // stage
00:01:18.285 [Pipeline] stage
00:01:18.287 [Pipeline] { (Run VM)
00:01:18.299 [Pipeline] sh
00:01:18.583 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:18.583 + echo 'Start stage prepare_nvme.sh'
00:01:18.583 Start stage prepare_nvme.sh
00:01:18.583 + [[ -n 1 ]]
00:01:18.583 + disk_prefix=ex1
00:01:18.583 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]]
00:01:18.583 + [[ -e /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]]
00:01:18.583 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf
00:01:18.583 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:18.583 ++ SPDK_RUN_ASAN=1
00:01:18.583 ++ SPDK_RUN_UBSAN=1
00:01:18.583 ++ SPDK_TEST_RAID=1
00:01:18.583 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:18.583 ++ RUN_NIGHTLY=1
00:01:18.583 + cd /var/jenkins/workspace/raid-vg-autotest
00:01:18.583 + nvme_files=()
00:01:18.583 + declare -A nvme_files
00:01:18.583 + backend_dir=/var/lib/libvirt/images/backends
00:01:18.583 + nvme_files['nvme.img']=5G
00:01:18.583 + nvme_files['nvme-cmb.img']=5G
00:01:18.583 + nvme_files['nvme-multi0.img']=4G
00:01:18.583 + nvme_files['nvme-multi1.img']=4G
00:01:18.583 + nvme_files['nvme-multi2.img']=4G
00:01:18.583 + nvme_files['nvme-openstack.img']=8G
00:01:18.583 + nvme_files['nvme-zns.img']=5G
00:01:18.583 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:18.583 + (( SPDK_TEST_FTL == 1 ))
00:01:18.583 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:18.583 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:18.583 + for nvme in "${!nvme_files[@]}"
00:01:18.583 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:01:18.583 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:18.583 + for nvme in "${!nvme_files[@]}"
00:01:18.583 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:01:18.583 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:18.583 + for nvme in "${!nvme_files[@]}"
00:01:18.583 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:01:18.583 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:18.583 + for nvme in "${!nvme_files[@]}"
00:01:18.583 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:01:18.583 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:18.584 + for nvme in "${!nvme_files[@]}"
00:01:18.584 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:01:18.584 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:18.584 + for nvme in "${!nvme_files[@]}"
00:01:18.584 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:01:18.584 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:18.584 + for nvme in "${!nvme_files[@]}"
00:01:18.584 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:01:19.523 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:19.523 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:01:19.523 + echo 'End stage prepare_nvme.sh'
00:01:19.523 End stage prepare_nvme.sh
00:01:19.534 [Pipeline] sh
00:01:19.844 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:19.844 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -H -a -v -f fedora39
00:01:19.844
00:01:19.844 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant
00:01:19.844 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk
00:01:19.844 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest
00:01:19.844 HELP=0
00:01:19.844 DRY_RUN=0
00:01:19.844 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,
00:01:19.844 NVME_DISKS_TYPE=nvme,nvme,
00:01:19.844 NVME_AUTO_CREATE=0
00:01:19.844 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,
00:01:19.844 NVME_CMB=,,
00:01:19.844 NVME_PMR=,,
00:01:19.844 NVME_ZNS=,,
00:01:19.844 NVME_MS=,,
00:01:19.844 NVME_FDP=,,
00:01:19.844 SPDK_VAGRANT_DISTRO=fedora39
00:01:19.844 SPDK_VAGRANT_VMCPU=10
00:01:19.844 SPDK_VAGRANT_VMRAM=12288
00:01:19.844 SPDK_VAGRANT_PROVIDER=libvirt
00:01:19.844 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:19.844 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:19.844 SPDK_OPENSTACK_NETWORK=0
00:01:19.844 VAGRANT_PACKAGE_BOX=0
00:01:19.844 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:19.844 FORCE_DISTRO=true
00:01:19.844 VAGRANT_BOX_VERSION=
00:01:19.844 EXTRA_VAGRANTFILES=
00:01:19.844 NIC_MODEL=virtio
00:01:19.844
00:01:19.844 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt'
00:01:19.844 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest
00:01:22.402 Bringing machine 'default' up with 'libvirt' provider...
00:01:22.402 ==> default: Creating image (snapshot of base box volume).
00:01:22.662 ==> default: Creating domain with the following settings...
00:01:22.662 ==> default:  -- Name: fedora39-39-1.5-1721788873-2326_default_1733981969_5873df51c2bde2f3db9d
00:01:22.662 ==> default:  -- Domain type: kvm
00:01:22.662 ==> default:  -- Cpus: 10
00:01:22.662 ==> default:  -- Feature: acpi
00:01:22.662 ==> default:  -- Feature: apic
00:01:22.662 ==> default:  -- Feature: pae
00:01:22.662 ==> default:  -- Memory: 12288M
00:01:22.662 ==> default:  -- Memory Backing: hugepages:
00:01:22.662 ==> default:  -- Management MAC:
00:01:22.662 ==> default:  -- Loader:
00:01:22.662 ==> default:  -- Nvram:
00:01:22.662 ==> default:  -- Base box: spdk/fedora39
00:01:22.662 ==> default:  -- Storage pool: default
00:01:22.662 ==> default:  -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1733981969_5873df51c2bde2f3db9d.img (20G)
00:01:22.662 ==> default:  -- Volume Cache: default
00:01:22.662 ==> default:  -- Kernel:
00:01:22.662 ==> default:  -- Initrd:
00:01:22.662 ==> default:  -- Graphics Type: vnc
00:01:22.662 ==> default:  -- Graphics Port: -1
00:01:22.662 ==> default:  -- Graphics IP: 127.0.0.1
00:01:22.662 ==> default:  -- Graphics Password: Not defined
00:01:22.662 ==> default:  -- Video Type: cirrus
00:01:22.662 ==> default:  -- Video VRAM: 9216
00:01:22.662 ==> default:  -- Sound Type:
00:01:22.662 ==> default:  -- Keymap: en-us
00:01:22.662 ==> default:  -- TPM Path:
00:01:22.662 ==> default:  -- INPUT: type=mouse, bus=ps2
00:01:22.662 ==> default:  -- Command line args:
00:01:22.662 ==> default:  -> value=-device,
00:01:22.662 ==> default:  -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:22.662 ==> default:  -> value=-drive,
00:01:22.662 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:01:22.662 ==> default:  -> value=-device,
00:01:22.662 ==> default:  -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:22.662 ==> default:  -> value=-device,
00:01:22.662 ==> default:  -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:22.662 ==> default:  -> value=-drive,
00:01:22.662 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:22.662 ==> default:  -> value=-device,
00:01:22.662 ==> default:  -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:22.662 ==> default:  -> value=-drive,
00:01:22.662 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:22.662 ==> default:  -> value=-device,
00:01:22.662 ==> default:  -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:22.662 ==> default:  -> value=-drive,
00:01:22.662 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:22.662 ==> default:  -> value=-device,
00:01:22.662 ==> default:  -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:22.662 ==> default: Creating shared folders metadata...
00:01:22.662 ==> default: Starting domain.
00:01:24.043 ==> default: Waiting for domain to get an IP address...
00:01:42.147 ==> default: Waiting for SSH to become available...
00:01:42.147 ==> default: Configuring and enabling network interfaces...
00:01:47.430     default: SSH address: 192.168.121.194:22
00:01:47.430     default: SSH username: vagrant
00:01:47.430     default: SSH auth method: private key
00:01:49.970 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:58.102 ==> default: Mounting SSHFS shared folder...
00:02:00.015 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:00.015 ==> default: Checking Mount..
00:02:01.922 ==> default: Folder Successfully Mounted!
00:02:01.922 ==> default: Running provisioner: file...
00:02:02.860     default: ~/.gitconfig => .gitconfig
00:02:03.430
00:02:03.430 SUCCESS!
00:02:03.430
00:02:03.430 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:03.430 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:03.430 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:03.430
00:02:03.439 [Pipeline] }
00:02:03.454 [Pipeline] // stage
00:02:03.463 [Pipeline] dir
00:02:03.464 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt
00:02:03.466 [Pipeline] {
00:02:03.478 [Pipeline] catchError
00:02:03.480 [Pipeline] {
00:02:03.492 [Pipeline] sh
00:02:03.776 + vagrant ssh-config --host vagrant
00:02:03.776 + sed -ne /^Host/,$p
00:02:03.776 + tee ssh_conf
00:02:06.312 Host vagrant
00:02:06.312 HostName 192.168.121.194
00:02:06.312 User vagrant
00:02:06.312 Port 22
00:02:06.312 UserKnownHostsFile /dev/null
00:02:06.312 StrictHostKeyChecking no
00:02:06.312 PasswordAuthentication no
00:02:06.312 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:06.312 IdentitiesOnly yes
00:02:06.312 LogLevel FATAL
00:02:06.312 ForwardAgent yes
00:02:06.313 ForwardX11 yes
00:02:06.313
00:02:06.325 [Pipeline] withEnv
00:02:06.328 [Pipeline] {
00:02:06.340 [Pipeline] sh
00:02:06.622 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:06.622 source /etc/os-release
00:02:06.622 [[ -e /image.version ]] && img=$(< /image.version)
00:02:06.622 # Minimal, systemd-like check.
00:02:06.622 if [[ -e /.dockerenv ]]; then
00:02:06.622 # Clear garbage from the node's name:
00:02:06.622 # agt-er_autotest_547-896 -> autotest_547-896
00:02:06.622 # $HOSTNAME is the actual container id
00:02:06.622 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:06.622 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:06.622 # We can assume this is a mount from a host where container is running,
00:02:06.622 # so fetch its hostname to easily identify the target swarm worker.
00:02:06.622 container="$(< /etc/hostname) ($agent)"
00:02:06.622 else
00:02:06.622 # Fallback
00:02:06.622 container=$agent
00:02:06.622 fi
00:02:06.622 fi
00:02:06.622 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:06.622
00:02:06.893 [Pipeline] }
00:02:06.908 [Pipeline] // withEnv
00:02:06.915 [Pipeline] setCustomBuildProperty
00:02:06.929 [Pipeline] stage
00:02:06.932 [Pipeline] { (Tests)
00:02:06.948 [Pipeline] sh
00:02:07.228 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:07.501 [Pipeline] sh
00:02:07.782 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:08.055 [Pipeline] timeout
00:02:08.055 Timeout set to expire in 1 hr 30 min
00:02:08.057 [Pipeline] {
00:02:08.071 [Pipeline] sh
00:02:08.354 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:08.922 HEAD is now at d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:02:08.934 [Pipeline] sh
00:02:09.225 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:09.497 [Pipeline] sh
00:02:09.779 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:10.054 [Pipeline] sh
00:02:10.338 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:02:10.597 ++ readlink -f spdk_repo
00:02:10.597 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:10.597 + [[ -n /home/vagrant/spdk_repo ]]
00:02:10.597 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:10.597 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:10.597 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:10.597 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:10.597 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:10.597 + [[ raid-vg-autotest == pkgdep-* ]]
00:02:10.597 + cd /home/vagrant/spdk_repo
00:02:10.597 + source /etc/os-release
00:02:10.597 ++ NAME='Fedora Linux'
00:02:10.597 ++ VERSION='39 (Cloud Edition)'
00:02:10.597 ++ ID=fedora
00:02:10.597 ++ VERSION_ID=39
00:02:10.597 ++ VERSION_CODENAME=
00:02:10.597 ++ PLATFORM_ID=platform:f39
00:02:10.597 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:10.597 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:10.597 ++ LOGO=fedora-logo-icon
00:02:10.597 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:10.597 ++ HOME_URL=https://fedoraproject.org/
00:02:10.597 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:10.597 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:10.597 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:10.597 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:10.597 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:10.597 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:10.597 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:10.597 ++ SUPPORT_END=2024-11-12
00:02:10.597 ++ VARIANT='Cloud Edition'
00:02:10.597 ++ VARIANT_ID=cloud
00:02:10.597 + uname -a
00:02:10.597 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:10.597 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:11.166 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:11.167 Hugepages
00:02:11.167 node hugesize free / total
00:02:11.167 node0 1048576kB 0 / 0
00:02:11.167 node0 2048kB 0 / 0
00:02:11.167
00:02:11.167 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:11.167 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:11.167 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:11.167 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:11.167 + rm -f /tmp/spdk-ld-path
00:02:11.167 + source autorun-spdk.conf
00:02:11.167 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:11.167 ++ SPDK_RUN_ASAN=1
00:02:11.167 ++ SPDK_RUN_UBSAN=1
00:02:11.167 ++ SPDK_TEST_RAID=1
00:02:11.167 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:11.167 ++ RUN_NIGHTLY=1
00:02:11.167 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:11.167 + [[ -n '' ]]
00:02:11.167 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:11.167 + for M in /var/spdk/build-*-manifest.txt
00:02:11.167 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:11.167 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:11.167 + for M in /var/spdk/build-*-manifest.txt
00:02:11.167 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:11.167 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:11.167 + for M in /var/spdk/build-*-manifest.txt
00:02:11.167 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:11.167 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:11.167 ++ uname
00:02:11.167 + [[ Linux == \L\i\n\u\x ]]
00:02:11.167 + sudo dmesg -T
00:02:11.427 + sudo dmesg --clear
00:02:11.427 + dmesg_pid=5413
00:02:11.427 + sudo dmesg -Tw
00:02:11.427 + [[ Fedora Linux == FreeBSD ]]
00:02:11.427 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:11.427 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:11.427 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:11.427 + [[ -x /usr/src/fio-static/fio ]]
00:02:11.427 + export FIO_BIN=/usr/src/fio-static/fio
00:02:11.427 + FIO_BIN=/usr/src/fio-static/fio
00:02:11.427 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:11.427 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:11.427 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:11.427 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:11.427 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:11.427 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:11.427 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:11.427 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:11.427 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:11.427 05:40:18 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:11.427 05:40:18 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:11.427 05:40:18 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:11.427 05:40:18 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1
00:02:11.427 05:40:18 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1
00:02:11.427 05:40:18 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1
00:02:11.427 05:40:18 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:11.427 05:40:18 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=1
00:02:11.427 05:40:18 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:11.427 05:40:18 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:11.427 05:40:18 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:11.427 05:40:18 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:11.427 05:40:18 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:11.427 05:40:18 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:11.427 05:40:18 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:11.427 05:40:18 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:11.427 05:40:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:11.427 05:40:18 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:11.427 05:40:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:11.427 05:40:18 -- paths/export.sh@5 -- $ export PATH
00:02:11.427 05:40:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:11.427 05:40:18 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:11.688 05:40:18 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:11.688 05:40:18 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733982018.XXXXXX
00:02:11.688 05:40:18 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733982018.ubjIDj
00:02:11.688 05:40:18 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:11.688 05:40:18 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:11.688 05:40:18 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:11.688 05:40:18 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:11.688 05:40:18 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:11.688 05:40:18 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:11.688 05:40:18 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:11.688 05:40:18 -- common/autotest_common.sh@10 -- $ set +x
00:02:11.688 05:40:18 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:02:11.688 05:40:18 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:11.688 05:40:18 -- pm/common@17 -- $ local monitor
00:02:11.688 05:40:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:11.688 05:40:18 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:11.688 05:40:18 -- pm/common@25 -- $ sleep 1
00:02:11.688 05:40:18 -- pm/common@21 -- $ date +%s
00:02:11.688 05:40:18 -- pm/common@21 -- $ date +%s
00:02:11.688 05:40:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733982018
00:02:11.688 05:40:18 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733982018
00:02:11.688 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733982018_collect-cpu-load.pm.log
00:02:11.688 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733982018_collect-vmstat.pm.log
00:02:12.628 05:40:19 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:12.628 05:40:19 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:12.628 05:40:19 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:12.628 05:40:19 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:12.628 05:40:19 -- spdk/autobuild.sh@16 -- $ date -u
00:02:12.628 Thu Dec 12 05:40:19 AM UTC 2024
00:02:12.628 05:40:19 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:12.628 v25.01-rc1-1-gd58eef2a2
00:02:12.628 05:40:20 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:12.628 05:40:20 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:12.628 05:40:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:12.628 05:40:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:12.628 05:40:20 -- common/autotest_common.sh@10 -- $ set +x
00:02:12.628 ************************************
00:02:12.628 START TEST asan
00:02:12.628 ************************************
00:02:12.628 using asan
00:02:12.628 05:40:20 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:12.628
00:02:12.628 real 0m0.000s
00:02:12.628 user 0m0.000s
00:02:12.628 sys 0m0.000s
00:02:12.628 05:40:20 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:12.628 05:40:20 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:12.628 ************************************
00:02:12.628 END TEST asan
00:02:12.628 ************************************
00:02:12.628 05:40:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:12.628 05:40:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:12.628 05:40:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:12.628 05:40:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:12.628 05:40:20 -- common/autotest_common.sh@10 -- $ set +x
00:02:12.628 ************************************
00:02:12.628 START TEST ubsan
00:02:12.628 ************************************
00:02:12.628 using ubsan
00:02:12.628 05:40:20 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:12.628
00:02:12.628 real 0m0.001s
00:02:12.628 user 0m0.000s
00:02:12.628 sys 0m0.001s
00:02:12.628 05:40:20 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:12.628 05:40:20 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:12.628 ************************************
00:02:12.628 END TEST ubsan
00:02:12.628 ************************************
00:02:12.628 05:40:20 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:12.628 05:40:20 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:12.628 05:40:20 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:12.628 05:40:20 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:12.628 05:40:20 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:12.628 05:40:20 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:12.628 05:40:20 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:12.628 05:40:20 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:12.628 05:40:20 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:02:12.888 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:12.888 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:13.458 Using 'verbs' RDMA provider
00:02:29.294 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:47.399 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:47.399 Creating mk/config.mk...done.
00:02:47.399 Creating mk/cc.flags.mk...done.
00:02:47.399 Type 'make' to build.
00:02:47.399 05:40:52 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:47.399 05:40:52 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:47.399 05:40:52 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:47.399 05:40:52 -- common/autotest_common.sh@10 -- $ set +x
00:02:47.399 ************************************
00:02:47.399 START TEST make
00:02:47.399 ************************************
00:02:47.399 05:40:52 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:57.386 The Meson build system
00:02:57.386 Version: 1.5.0
00:02:57.386 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:57.386 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:57.386 Build type: native build
00:02:57.386 Program cat found: YES (/usr/bin/cat)
00:02:57.386 Project name: DPDK
00:02:57.386 Project version: 24.03.0
00:02:57.386 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:57.386 C linker for the host machine: cc ld.bfd 2.40-14
00:02:57.386 Host machine cpu family: x86_64
00:02:57.386 Host machine cpu: x86_64
00:02:57.386 Message: ## Building in Developer Mode ##
00:02:57.386 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:57.386 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:57.386 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:57.386 Program python3 found: YES (/usr/bin/python3)
00:02:57.386 Program cat found: YES (/usr/bin/cat)
00:02:57.386 Compiler for C supports arguments -march=native: YES
00:02:57.386 Checking for size of "void *" : 8
00:02:57.386 Checking for size of "void *" : 8 (cached)
00:02:57.386 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:57.386 Library m found: YES
00:02:57.386 Library numa found: YES
00:02:57.386 Has header "numaif.h" : YES
00:02:57.386 Library fdt found: NO
00:02:57.386 Library execinfo found: NO
00:02:57.386 Has header "execinfo.h" : YES
00:02:57.386 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:57.386 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:57.386 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:57.386 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:57.386 Run-time dependency openssl found: YES 3.1.1
00:02:57.386 Run-time dependency libpcap found: YES 1.10.4
00:02:57.386 Has header "pcap.h" with dependency libpcap: YES
00:02:57.386 Compiler for C supports arguments -Wcast-qual: YES
00:02:57.386 Compiler for C supports arguments -Wdeprecated: YES
00:02:57.386 Compiler for C supports arguments -Wformat: YES
00:02:57.386 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:57.386 Compiler for C supports arguments -Wformat-security: NO
00:02:57.386 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:57.386 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:57.386 Compiler for C supports arguments -Wnested-externs: YES
00:02:57.386 Compiler for C supports arguments -Wold-style-definition: YES
00:02:57.386 Compiler for C supports arguments -Wpointer-arith: YES
00:02:57.386 Compiler for C supports arguments -Wsign-compare: YES
00:02:57.386 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:57.386 Compiler for C supports arguments -Wundef: YES
00:02:57.386 Compiler for C supports arguments -Wwrite-strings: YES
00:02:57.386 Compiler for C supports
arguments -Wno-address-of-packed-member: YES 00:02:57.386 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:57.386 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:57.386 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:57.387 Program objdump found: YES (/usr/bin/objdump) 00:02:57.387 Compiler for C supports arguments -mavx512f: YES 00:02:57.387 Checking if "AVX512 checking" compiles: YES 00:02:57.387 Fetching value of define "__SSE4_2__" : 1 00:02:57.387 Fetching value of define "__AES__" : 1 00:02:57.387 Fetching value of define "__AVX__" : 1 00:02:57.387 Fetching value of define "__AVX2__" : 1 00:02:57.387 Fetching value of define "__AVX512BW__" : 1 00:02:57.387 Fetching value of define "__AVX512CD__" : 1 00:02:57.387 Fetching value of define "__AVX512DQ__" : 1 00:02:57.387 Fetching value of define "__AVX512F__" : 1 00:02:57.387 Fetching value of define "__AVX512VL__" : 1 00:02:57.387 Fetching value of define "__PCLMUL__" : 1 00:02:57.387 Fetching value of define "__RDRND__" : 1 00:02:57.387 Fetching value of define "__RDSEED__" : 1 00:02:57.387 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:57.387 Fetching value of define "__znver1__" : (undefined) 00:02:57.387 Fetching value of define "__znver2__" : (undefined) 00:02:57.387 Fetching value of define "__znver3__" : (undefined) 00:02:57.387 Fetching value of define "__znver4__" : (undefined) 00:02:57.387 Library asan found: YES 00:02:57.387 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:57.387 Message: lib/log: Defining dependency "log" 00:02:57.387 Message: lib/kvargs: Defining dependency "kvargs" 00:02:57.387 Message: lib/telemetry: Defining dependency "telemetry" 00:02:57.387 Library rt found: YES 00:02:57.387 Checking for function "getentropy" : NO 00:02:57.387 Message: lib/eal: Defining dependency "eal" 00:02:57.387 Message: lib/ring: Defining dependency "ring" 00:02:57.387 Message: lib/rcu: Defining 
dependency "rcu" 00:02:57.387 Message: lib/mempool: Defining dependency "mempool" 00:02:57.387 Message: lib/mbuf: Defining dependency "mbuf" 00:02:57.387 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:57.387 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:57.387 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:57.387 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:57.387 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:57.387 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:57.387 Compiler for C supports arguments -mpclmul: YES 00:02:57.387 Compiler for C supports arguments -maes: YES 00:02:57.387 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:57.387 Compiler for C supports arguments -mavx512bw: YES 00:02:57.387 Compiler for C supports arguments -mavx512dq: YES 00:02:57.387 Compiler for C supports arguments -mavx512vl: YES 00:02:57.387 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:57.387 Compiler for C supports arguments -mavx2: YES 00:02:57.387 Compiler for C supports arguments -mavx: YES 00:02:57.387 Message: lib/net: Defining dependency "net" 00:02:57.387 Message: lib/meter: Defining dependency "meter" 00:02:57.387 Message: lib/ethdev: Defining dependency "ethdev" 00:02:57.387 Message: lib/pci: Defining dependency "pci" 00:02:57.387 Message: lib/cmdline: Defining dependency "cmdline" 00:02:57.387 Message: lib/hash: Defining dependency "hash" 00:02:57.387 Message: lib/timer: Defining dependency "timer" 00:02:57.387 Message: lib/compressdev: Defining dependency "compressdev" 00:02:57.387 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:57.387 Message: lib/dmadev: Defining dependency "dmadev" 00:02:57.387 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:57.387 Message: lib/power: Defining dependency "power" 00:02:57.387 Message: lib/reorder: Defining dependency "reorder" 00:02:57.387 Message: lib/security: Defining dependency "security" 
00:02:57.387 Has header "linux/userfaultfd.h" : YES 00:02:57.387 Has header "linux/vduse.h" : YES 00:02:57.387 Message: lib/vhost: Defining dependency "vhost" 00:02:57.387 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:57.387 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:57.387 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:57.387 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:57.387 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:57.387 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:57.387 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:57.387 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:57.387 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:57.387 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:57.387 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:57.387 Configuring doxy-api-html.conf using configuration 00:02:57.387 Configuring doxy-api-man.conf using configuration 00:02:57.387 Program mandb found: YES (/usr/bin/mandb) 00:02:57.387 Program sphinx-build found: NO 00:02:57.387 Configuring rte_build_config.h using configuration 00:02:57.387 Message: 00:02:57.387 ================= 00:02:57.387 Applications Enabled 00:02:57.387 ================= 00:02:57.387 00:02:57.387 apps: 00:02:57.387 00:02:57.387 00:02:57.387 Message: 00:02:57.387 ================= 00:02:57.387 Libraries Enabled 00:02:57.387 ================= 00:02:57.387 00:02:57.387 libs: 00:02:57.387 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:57.387 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:57.387 cryptodev, dmadev, power, reorder, security, vhost, 00:02:57.387 00:02:57.387 Message: 00:02:57.387 =============== 00:02:57.387 Drivers Enabled 00:02:57.387 =============== 00:02:57.387 
00:02:57.387 common: 00:02:57.387 00:02:57.387 bus: 00:02:57.387 pci, vdev, 00:02:57.387 mempool: 00:02:57.387 ring, 00:02:57.387 dma: 00:02:57.387 00:02:57.387 net: 00:02:57.387 00:02:57.387 crypto: 00:02:57.387 00:02:57.387 compress: 00:02:57.387 00:02:57.387 vdpa: 00:02:57.387 00:02:57.387 00:02:57.387 Message: 00:02:57.387 ================= 00:02:57.387 Content Skipped 00:02:57.387 ================= 00:02:57.387 00:02:57.387 apps: 00:02:57.387 dumpcap: explicitly disabled via build config 00:02:57.387 graph: explicitly disabled via build config 00:02:57.387 pdump: explicitly disabled via build config 00:02:57.387 proc-info: explicitly disabled via build config 00:02:57.387 test-acl: explicitly disabled via build config 00:02:57.387 test-bbdev: explicitly disabled via build config 00:02:57.387 test-cmdline: explicitly disabled via build config 00:02:57.387 test-compress-perf: explicitly disabled via build config 00:02:57.387 test-crypto-perf: explicitly disabled via build config 00:02:57.387 test-dma-perf: explicitly disabled via build config 00:02:57.387 test-eventdev: explicitly disabled via build config 00:02:57.387 test-fib: explicitly disabled via build config 00:02:57.387 test-flow-perf: explicitly disabled via build config 00:02:57.387 test-gpudev: explicitly disabled via build config 00:02:57.387 test-mldev: explicitly disabled via build config 00:02:57.387 test-pipeline: explicitly disabled via build config 00:02:57.387 test-pmd: explicitly disabled via build config 00:02:57.387 test-regex: explicitly disabled via build config 00:02:57.387 test-sad: explicitly disabled via build config 00:02:57.387 test-security-perf: explicitly disabled via build config 00:02:57.387 00:02:57.387 libs: 00:02:57.387 argparse: explicitly disabled via build config 00:02:57.387 metrics: explicitly disabled via build config 00:02:57.387 acl: explicitly disabled via build config 00:02:57.387 bbdev: explicitly disabled via build config 00:02:57.387 bitratestats: explicitly 
disabled via build config 00:02:57.387 bpf: explicitly disabled via build config 00:02:57.387 cfgfile: explicitly disabled via build config 00:02:57.387 distributor: explicitly disabled via build config 00:02:57.387 efd: explicitly disabled via build config 00:02:57.387 eventdev: explicitly disabled via build config 00:02:57.387 dispatcher: explicitly disabled via build config 00:02:57.387 gpudev: explicitly disabled via build config 00:02:57.387 gro: explicitly disabled via build config 00:02:57.387 gso: explicitly disabled via build config 00:02:57.387 ip_frag: explicitly disabled via build config 00:02:57.387 jobstats: explicitly disabled via build config 00:02:57.387 latencystats: explicitly disabled via build config 00:02:57.387 lpm: explicitly disabled via build config 00:02:57.387 member: explicitly disabled via build config 00:02:57.387 pcapng: explicitly disabled via build config 00:02:57.387 rawdev: explicitly disabled via build config 00:02:57.387 regexdev: explicitly disabled via build config 00:02:57.387 mldev: explicitly disabled via build config 00:02:57.387 rib: explicitly disabled via build config 00:02:57.387 sched: explicitly disabled via build config 00:02:57.387 stack: explicitly disabled via build config 00:02:57.387 ipsec: explicitly disabled via build config 00:02:57.387 pdcp: explicitly disabled via build config 00:02:57.387 fib: explicitly disabled via build config 00:02:57.387 port: explicitly disabled via build config 00:02:57.387 pdump: explicitly disabled via build config 00:02:57.387 table: explicitly disabled via build config 00:02:57.387 pipeline: explicitly disabled via build config 00:02:57.387 graph: explicitly disabled via build config 00:02:57.387 node: explicitly disabled via build config 00:02:57.387 00:02:57.387 drivers: 00:02:57.387 common/cpt: not in enabled drivers build config 00:02:57.387 common/dpaax: not in enabled drivers build config 00:02:57.387 common/iavf: not in enabled drivers build config 00:02:57.387 
common/idpf: not in enabled drivers build config 00:02:57.387 common/ionic: not in enabled drivers build config 00:02:57.387 common/mvep: not in enabled drivers build config 00:02:57.387 common/octeontx: not in enabled drivers build config 00:02:57.387 bus/auxiliary: not in enabled drivers build config 00:02:57.387 bus/cdx: not in enabled drivers build config 00:02:57.387 bus/dpaa: not in enabled drivers build config 00:02:57.387 bus/fslmc: not in enabled drivers build config 00:02:57.387 bus/ifpga: not in enabled drivers build config 00:02:57.387 bus/platform: not in enabled drivers build config 00:02:57.387 bus/uacce: not in enabled drivers build config 00:02:57.387 bus/vmbus: not in enabled drivers build config 00:02:57.387 common/cnxk: not in enabled drivers build config 00:02:57.387 common/mlx5: not in enabled drivers build config 00:02:57.387 common/nfp: not in enabled drivers build config 00:02:57.387 common/nitrox: not in enabled drivers build config 00:02:57.387 common/qat: not in enabled drivers build config 00:02:57.387 common/sfc_efx: not in enabled drivers build config 00:02:57.387 mempool/bucket: not in enabled drivers build config 00:02:57.388 mempool/cnxk: not in enabled drivers build config 00:02:57.388 mempool/dpaa: not in enabled drivers build config 00:02:57.388 mempool/dpaa2: not in enabled drivers build config 00:02:57.388 mempool/octeontx: not in enabled drivers build config 00:02:57.388 mempool/stack: not in enabled drivers build config 00:02:57.388 dma/cnxk: not in enabled drivers build config 00:02:57.388 dma/dpaa: not in enabled drivers build config 00:02:57.388 dma/dpaa2: not in enabled drivers build config 00:02:57.388 dma/hisilicon: not in enabled drivers build config 00:02:57.388 dma/idxd: not in enabled drivers build config 00:02:57.388 dma/ioat: not in enabled drivers build config 00:02:57.388 dma/skeleton: not in enabled drivers build config 00:02:57.388 net/af_packet: not in enabled drivers build config 00:02:57.388 net/af_xdp: 
not in enabled drivers build config 00:02:57.388 net/ark: not in enabled drivers build config 00:02:57.388 net/atlantic: not in enabled drivers build config 00:02:57.388 net/avp: not in enabled drivers build config 00:02:57.388 net/axgbe: not in enabled drivers build config 00:02:57.388 net/bnx2x: not in enabled drivers build config 00:02:57.388 net/bnxt: not in enabled drivers build config 00:02:57.388 net/bonding: not in enabled drivers build config 00:02:57.388 net/cnxk: not in enabled drivers build config 00:02:57.388 net/cpfl: not in enabled drivers build config 00:02:57.388 net/cxgbe: not in enabled drivers build config 00:02:57.388 net/dpaa: not in enabled drivers build config 00:02:57.388 net/dpaa2: not in enabled drivers build config 00:02:57.388 net/e1000: not in enabled drivers build config 00:02:57.388 net/ena: not in enabled drivers build config 00:02:57.388 net/enetc: not in enabled drivers build config 00:02:57.388 net/enetfec: not in enabled drivers build config 00:02:57.388 net/enic: not in enabled drivers build config 00:02:57.388 net/failsafe: not in enabled drivers build config 00:02:57.388 net/fm10k: not in enabled drivers build config 00:02:57.388 net/gve: not in enabled drivers build config 00:02:57.388 net/hinic: not in enabled drivers build config 00:02:57.388 net/hns3: not in enabled drivers build config 00:02:57.388 net/i40e: not in enabled drivers build config 00:02:57.388 net/iavf: not in enabled drivers build config 00:02:57.388 net/ice: not in enabled drivers build config 00:02:57.388 net/idpf: not in enabled drivers build config 00:02:57.388 net/igc: not in enabled drivers build config 00:02:57.388 net/ionic: not in enabled drivers build config 00:02:57.388 net/ipn3ke: not in enabled drivers build config 00:02:57.388 net/ixgbe: not in enabled drivers build config 00:02:57.388 net/mana: not in enabled drivers build config 00:02:57.388 net/memif: not in enabled drivers build config 00:02:57.388 net/mlx4: not in enabled drivers build 
config 00:02:57.388 net/mlx5: not in enabled drivers build config 00:02:57.388 net/mvneta: not in enabled drivers build config 00:02:57.388 net/mvpp2: not in enabled drivers build config 00:02:57.388 net/netvsc: not in enabled drivers build config 00:02:57.388 net/nfb: not in enabled drivers build config 00:02:57.388 net/nfp: not in enabled drivers build config 00:02:57.388 net/ngbe: not in enabled drivers build config 00:02:57.388 net/null: not in enabled drivers build config 00:02:57.388 net/octeontx: not in enabled drivers build config 00:02:57.388 net/octeon_ep: not in enabled drivers build config 00:02:57.388 net/pcap: not in enabled drivers build config 00:02:57.388 net/pfe: not in enabled drivers build config 00:02:57.388 net/qede: not in enabled drivers build config 00:02:57.388 net/ring: not in enabled drivers build config 00:02:57.388 net/sfc: not in enabled drivers build config 00:02:57.388 net/softnic: not in enabled drivers build config 00:02:57.388 net/tap: not in enabled drivers build config 00:02:57.388 net/thunderx: not in enabled drivers build config 00:02:57.388 net/txgbe: not in enabled drivers build config 00:02:57.388 net/vdev_netvsc: not in enabled drivers build config 00:02:57.388 net/vhost: not in enabled drivers build config 00:02:57.388 net/virtio: not in enabled drivers build config 00:02:57.388 net/vmxnet3: not in enabled drivers build config 00:02:57.388 raw/*: missing internal dependency, "rawdev" 00:02:57.388 crypto/armv8: not in enabled drivers build config 00:02:57.388 crypto/bcmfs: not in enabled drivers build config 00:02:57.388 crypto/caam_jr: not in enabled drivers build config 00:02:57.388 crypto/ccp: not in enabled drivers build config 00:02:57.388 crypto/cnxk: not in enabled drivers build config 00:02:57.388 crypto/dpaa_sec: not in enabled drivers build config 00:02:57.388 crypto/dpaa2_sec: not in enabled drivers build config 00:02:57.388 crypto/ipsec_mb: not in enabled drivers build config 00:02:57.388 crypto/mlx5: not in 
enabled drivers build config 00:02:57.388 crypto/mvsam: not in enabled drivers build config 00:02:57.388 crypto/nitrox: not in enabled drivers build config 00:02:57.388 crypto/null: not in enabled drivers build config 00:02:57.388 crypto/octeontx: not in enabled drivers build config 00:02:57.388 crypto/openssl: not in enabled drivers build config 00:02:57.388 crypto/scheduler: not in enabled drivers build config 00:02:57.388 crypto/uadk: not in enabled drivers build config 00:02:57.388 crypto/virtio: not in enabled drivers build config 00:02:57.388 compress/isal: not in enabled drivers build config 00:02:57.388 compress/mlx5: not in enabled drivers build config 00:02:57.388 compress/nitrox: not in enabled drivers build config 00:02:57.388 compress/octeontx: not in enabled drivers build config 00:02:57.388 compress/zlib: not in enabled drivers build config 00:02:57.388 regex/*: missing internal dependency, "regexdev" 00:02:57.388 ml/*: missing internal dependency, "mldev" 00:02:57.388 vdpa/ifc: not in enabled drivers build config 00:02:57.388 vdpa/mlx5: not in enabled drivers build config 00:02:57.388 vdpa/nfp: not in enabled drivers build config 00:02:57.388 vdpa/sfc: not in enabled drivers build config 00:02:57.388 event/*: missing internal dependency, "eventdev" 00:02:57.388 baseband/*: missing internal dependency, "bbdev" 00:02:57.388 gpu/*: missing internal dependency, "gpudev" 00:02:57.388 00:02:57.388 00:02:57.388 Build targets in project: 85 00:02:57.388 00:02:57.388 DPDK 24.03.0 00:02:57.388 00:02:57.388 User defined options 00:02:57.388 buildtype : debug 00:02:57.388 default_library : shared 00:02:57.388 libdir : lib 00:02:57.388 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:57.388 b_sanitize : address 00:02:57.388 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:57.388 c_link_args : 00:02:57.388 cpu_instruction_set: native 00:02:57.388 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:57.388 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:57.388 enable_docs : false 00:02:57.388 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:57.388 enable_kmods : false 00:02:57.388 max_lcores : 128 00:02:57.388 tests : false 00:02:57.388 00:02:57.388 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:57.388 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:57.388 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:57.388 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:57.388 [3/268] Linking static target lib/librte_kvargs.a 00:02:57.388 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:57.388 [5/268] Linking static target lib/librte_log.a 00:02:57.388 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:57.388 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.388 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:57.388 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:57.388 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:57.388 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:57.388 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:57.388 
[13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:57.647 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:57.647 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:57.647 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:57.647 [17/268] Linking static target lib/librte_telemetry.a 00:02:57.647 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:57.906 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.906 [20/268] Linking target lib/librte_log.so.24.1 00:02:57.906 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:57.906 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:57.906 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:58.165 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:58.165 [25/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:58.165 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:58.165 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:58.165 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:58.165 [29/268] Linking target lib/librte_kvargs.so.24.1 00:02:58.165 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:58.165 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:58.165 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:58.424 [33/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:58.424 [34/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:58.424 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:58.424 [36/268] Linking target lib/librte_telemetry.so.24.1 00:02:58.684 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:58.684 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:58.684 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:58.684 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:58.684 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:58.684 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:58.684 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:58.684 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:58.684 [45/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:58.684 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:58.943 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:58.943 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:59.202 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:59.202 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:59.202 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:59.202 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:59.202 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:59.202 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:59.461 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:59.461 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 
00:02:59.461 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:59.461 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:59.461 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:59.719 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:59.719 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:59.719 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:59.719 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:59.719 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:59.719 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:59.719 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:59.978 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:59.978 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:59.978 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:59.978 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:00.238 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:00.238 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:00.238 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:00.238 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:00.238 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:00.238 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:00.238 [77/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:00.238 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:00.497 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:00.497 
[80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:00.497 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:00.497 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:00.497 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:00.497 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:00.497 [85/268] Linking static target lib/librte_ring.a 00:03:00.755 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:00.755 [87/268] Linking static target lib/librte_eal.a 00:03:00.755 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:00.755 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:00.755 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:00.755 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:01.015 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:01.015 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:01.015 [94/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.015 [95/268] Linking static target lib/librte_mempool.a 00:03:01.015 [96/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:01.015 [97/268] Linking static target lib/librte_rcu.a 00:03:01.015 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:01.274 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:01.274 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:01.274 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:01.274 [102/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:01.533 [103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:01.533 
[104/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:01.533 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:01.533 [106/268] Linking static target lib/librte_net.a 00:03:01.533 [107/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.533 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:01.533 [109/268] Linking static target lib/librte_mbuf.a 00:03:01.533 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:01.793 [111/268] Linking static target lib/librte_meter.a 00:03:01.793 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:01.793 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:01.793 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.052 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.052 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:02.052 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:02.052 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.311 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:02.311 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:02.571 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:02.571 [122/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.571 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:02.571 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:02.832 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:02.832 [126/268] Compiling C object 
lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:02.832 [127/268] Linking static target lib/librte_pci.a 00:03:02.832 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:02.832 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:02.832 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:02.832 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:03.103 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:03.103 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:03.103 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:03.103 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:03.103 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:03.103 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:03.103 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.103 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:03.103 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:03.103 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:03.376 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:03.376 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:03.376 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:03.376 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:03.376 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:03.376 [147/268] Linking static target lib/librte_cmdline.a 00:03:03.635 [148/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:03.895 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:03.895 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:03.895 [151/268] Linking static target lib/librte_timer.a 00:03:03.895 [152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:03.895 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:03.895 [154/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:04.155 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:04.155 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:04.155 [157/268] Linking static target lib/librte_ethdev.a 00:03:04.415 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:04.415 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:04.415 [160/268] Linking static target lib/librte_hash.a 00:03:04.415 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:04.415 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:04.415 [163/268] Linking static target lib/librte_compressdev.a 00:03:04.415 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.415 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:04.674 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:04.674 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:04.674 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:04.674 [169/268] Linking static target lib/librte_dmadev.a 00:03:04.934 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:04.934 [171/268] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:04.934 [172/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.934 [173/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:05.194 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.194 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:05.453 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:05.453 [177/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.453 [178/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:05.453 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.453 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:05.453 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:05.712 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:05.712 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:05.712 [184/268] Linking static target lib/librte_cryptodev.a 00:03:05.712 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:05.712 [186/268] Linking static target lib/librte_power.a 00:03:05.971 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:05.971 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:05.971 [189/268] Linking static target lib/librte_reorder.a 00:03:05.971 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:06.230 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:06.230 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:06.230 [193/268] Linking static target lib/librte_security.a 
00:03:06.488 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:06.488 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.747 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.747 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.005 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:07.005 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:07.005 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:07.005 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:07.263 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:07.263 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:07.263 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:07.521 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:07.521 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:07.521 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:07.521 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:07.521 [209/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:07.521 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:07.781 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.781 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:07.781 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:07.781 [214/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:07.781 
[215/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:07.781 [216/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:07.781 [217/268] Linking static target drivers/librte_bus_vdev.a 00:03:07.781 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:07.781 [219/268] Linking static target drivers/librte_bus_pci.a 00:03:08.040 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:08.040 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:08.040 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:08.299 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:08.299 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:08.299 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:08.299 [226/268] Linking static target drivers/librte_mempool_ring.a 00:03:08.299 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.238 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:10.616 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.616 [230/268] Linking target lib/librte_eal.so.24.1 00:03:10.616 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:10.874 [232/268] Linking target lib/librte_ring.so.24.1 00:03:10.874 [233/268] Linking target lib/librte_pci.so.24.1 00:03:10.874 [234/268] Linking target lib/librte_meter.so.24.1 00:03:10.874 [235/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:10.874 [236/268] Linking target lib/librte_timer.so.24.1 00:03:10.874 [237/268] 
Linking target lib/librte_dmadev.so.24.1 00:03:10.874 [238/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:10.874 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:10.874 [240/268] Linking target lib/librte_mempool.so.24.1 00:03:10.874 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:10.874 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:10.874 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:10.874 [244/268] Linking target lib/librte_rcu.so.24.1 00:03:10.874 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:11.132 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:11.132 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:11.132 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:11.132 [249/268] Linking target lib/librte_mbuf.so.24.1 00:03:11.132 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:11.390 [251/268] Linking target lib/librte_compressdev.so.24.1 00:03:11.390 [252/268] Linking target lib/librte_reorder.so.24.1 00:03:11.390 [253/268] Linking target lib/librte_net.so.24.1 00:03:11.390 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:03:11.390 [255/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:11.390 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:11.390 [257/268] Linking target lib/librte_hash.so.24.1 00:03:11.390 [258/268] Linking target lib/librte_security.so.24.1 00:03:11.390 [259/268] Linking target lib/librte_cmdline.so.24.1 00:03:11.652 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:12.236 [261/268] Generating lib/ethdev.sym_chk 
with a custom command (wrapped by meson to capture output) 00:03:12.495 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:12.495 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:12.753 [264/268] Linking target lib/librte_power.so.24.1 00:03:13.011 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:13.011 [266/268] Linking static target lib/librte_vhost.a 00:03:15.544 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.544 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:15.544 INFO: autodetecting backend as ninja 00:03:15.544 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:33.650 CC lib/ut/ut.o 00:03:33.650 CC lib/log/log.o 00:03:33.650 CC lib/log/log_flags.o 00:03:33.650 CC lib/log/log_deprecated.o 00:03:33.650 CC lib/ut_mock/mock.o 00:03:33.650 LIB libspdk_ut_mock.a 00:03:33.650 LIB libspdk_log.a 00:03:33.650 LIB libspdk_ut.a 00:03:33.650 SO libspdk_ut_mock.so.6.0 00:03:33.650 SO libspdk_ut.so.2.0 00:03:33.650 SO libspdk_log.so.7.1 00:03:33.650 SYMLINK libspdk_ut_mock.so 00:03:33.650 SYMLINK libspdk_ut.so 00:03:33.650 SYMLINK libspdk_log.so 00:03:33.650 CC lib/dma/dma.o 00:03:33.650 CC lib/ioat/ioat.o 00:03:33.650 CC lib/util/base64.o 00:03:33.650 CC lib/util/bit_array.o 00:03:33.650 CC lib/util/cpuset.o 00:03:33.650 CC lib/util/crc16.o 00:03:33.650 CC lib/util/crc32.o 00:03:33.650 CC lib/util/crc32c.o 00:03:33.650 CXX lib/trace_parser/trace.o 00:03:33.650 CC lib/vfio_user/host/vfio_user_pci.o 00:03:33.650 CC lib/util/crc32_ieee.o 00:03:33.650 CC lib/util/crc64.o 00:03:33.650 CC lib/util/dif.o 00:03:33.650 LIB libspdk_dma.a 00:03:33.650 CC lib/util/fd.o 00:03:33.650 SO libspdk_dma.so.5.0 00:03:33.650 CC lib/vfio_user/host/vfio_user.o 00:03:33.650 SYMLINK libspdk_dma.so 00:03:33.650 CC lib/util/fd_group.o 00:03:33.650 CC lib/util/file.o 00:03:33.650 
CC lib/util/hexlify.o 00:03:33.650 LIB libspdk_ioat.a 00:03:33.650 CC lib/util/iov.o 00:03:33.650 SO libspdk_ioat.so.7.0 00:03:33.650 CC lib/util/math.o 00:03:33.650 SYMLINK libspdk_ioat.so 00:03:33.650 CC lib/util/net.o 00:03:33.650 CC lib/util/pipe.o 00:03:33.650 CC lib/util/strerror_tls.o 00:03:33.650 CC lib/util/string.o 00:03:33.650 LIB libspdk_vfio_user.a 00:03:33.650 SO libspdk_vfio_user.so.5.0 00:03:33.650 CC lib/util/uuid.o 00:03:33.650 CC lib/util/xor.o 00:03:33.650 CC lib/util/zipf.o 00:03:33.650 SYMLINK libspdk_vfio_user.so 00:03:33.650 CC lib/util/md5.o 00:03:33.910 LIB libspdk_util.a 00:03:34.169 SO libspdk_util.so.10.1 00:03:34.169 LIB libspdk_trace_parser.a 00:03:34.169 SO libspdk_trace_parser.so.6.0 00:03:34.169 SYMLINK libspdk_util.so 00:03:34.428 SYMLINK libspdk_trace_parser.so 00:03:34.428 CC lib/vmd/vmd.o 00:03:34.428 CC lib/vmd/led.o 00:03:34.428 CC lib/env_dpdk/env.o 00:03:34.428 CC lib/env_dpdk/memory.o 00:03:34.428 CC lib/env_dpdk/pci.o 00:03:34.428 CC lib/env_dpdk/init.o 00:03:34.428 CC lib/json/json_parse.o 00:03:34.428 CC lib/idxd/idxd.o 00:03:34.428 CC lib/rdma_utils/rdma_utils.o 00:03:34.428 CC lib/conf/conf.o 00:03:34.687 CC lib/idxd/idxd_user.o 00:03:34.687 CC lib/json/json_util.o 00:03:34.687 LIB libspdk_conf.a 00:03:34.687 SO libspdk_conf.so.6.0 00:03:34.687 LIB libspdk_rdma_utils.a 00:03:34.687 SO libspdk_rdma_utils.so.1.0 00:03:34.687 SYMLINK libspdk_conf.so 00:03:34.687 CC lib/idxd/idxd_kernel.o 00:03:34.946 SYMLINK libspdk_rdma_utils.so 00:03:34.946 CC lib/json/json_write.o 00:03:34.946 CC lib/env_dpdk/threads.o 00:03:34.946 CC lib/env_dpdk/pci_ioat.o 00:03:34.946 CC lib/env_dpdk/pci_virtio.o 00:03:34.946 CC lib/env_dpdk/pci_vmd.o 00:03:34.946 CC lib/env_dpdk/pci_idxd.o 00:03:34.946 CC lib/env_dpdk/pci_event.o 00:03:34.946 CC lib/env_dpdk/sigbus_handler.o 00:03:34.946 CC lib/env_dpdk/pci_dpdk.o 00:03:35.205 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:35.205 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:35.205 LIB libspdk_json.a 00:03:35.205 
CC lib/rdma_provider/common.o 00:03:35.205 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:35.205 SO libspdk_json.so.6.0 00:03:35.205 LIB libspdk_idxd.a 00:03:35.205 LIB libspdk_vmd.a 00:03:35.205 SYMLINK libspdk_json.so 00:03:35.205 SO libspdk_idxd.so.12.1 00:03:35.205 SO libspdk_vmd.so.6.0 00:03:35.205 SYMLINK libspdk_idxd.so 00:03:35.205 SYMLINK libspdk_vmd.so 00:03:35.464 LIB libspdk_rdma_provider.a 00:03:35.464 SO libspdk_rdma_provider.so.7.0 00:03:35.464 SYMLINK libspdk_rdma_provider.so 00:03:35.464 CC lib/jsonrpc/jsonrpc_server.o 00:03:35.464 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:35.464 CC lib/jsonrpc/jsonrpc_client.o 00:03:35.464 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:35.723 LIB libspdk_jsonrpc.a 00:03:35.983 SO libspdk_jsonrpc.so.6.0 00:03:35.983 SYMLINK libspdk_jsonrpc.so 00:03:36.267 LIB libspdk_env_dpdk.a 00:03:36.267 SO libspdk_env_dpdk.so.15.1 00:03:36.527 CC lib/rpc/rpc.o 00:03:36.527 SYMLINK libspdk_env_dpdk.so 00:03:36.787 LIB libspdk_rpc.a 00:03:36.787 SO libspdk_rpc.so.6.0 00:03:36.787 SYMLINK libspdk_rpc.so 00:03:37.046 CC lib/keyring/keyring.o 00:03:37.046 CC lib/keyring/keyring_rpc.o 00:03:37.306 CC lib/notify/notify.o 00:03:37.306 CC lib/notify/notify_rpc.o 00:03:37.306 CC lib/trace/trace.o 00:03:37.306 CC lib/trace/trace_flags.o 00:03:37.306 CC lib/trace/trace_rpc.o 00:03:37.306 LIB libspdk_notify.a 00:03:37.306 SO libspdk_notify.so.6.0 00:03:37.306 LIB libspdk_keyring.a 00:03:37.565 SO libspdk_keyring.so.2.0 00:03:37.565 SYMLINK libspdk_notify.so 00:03:37.565 LIB libspdk_trace.a 00:03:37.565 SO libspdk_trace.so.11.0 00:03:37.565 SYMLINK libspdk_keyring.so 00:03:37.565 SYMLINK libspdk_trace.so 00:03:38.133 CC lib/thread/thread.o 00:03:38.133 CC lib/thread/iobuf.o 00:03:38.133 CC lib/sock/sock.o 00:03:38.133 CC lib/sock/sock_rpc.o 00:03:38.393 LIB libspdk_sock.a 00:03:38.652 SO libspdk_sock.so.10.0 00:03:38.653 SYMLINK libspdk_sock.so 00:03:39.222 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:39.222 CC lib/nvme/nvme_ctrlr.o 00:03:39.222 CC 
lib/nvme/nvme_fabric.o 00:03:39.222 CC lib/nvme/nvme_ns_cmd.o 00:03:39.222 CC lib/nvme/nvme_ns.o 00:03:39.222 CC lib/nvme/nvme_pcie_common.o 00:03:39.222 CC lib/nvme/nvme_pcie.o 00:03:39.222 CC lib/nvme/nvme.o 00:03:39.222 CC lib/nvme/nvme_qpair.o 00:03:39.791 CC lib/nvme/nvme_quirks.o 00:03:39.791 CC lib/nvme/nvme_transport.o 00:03:39.791 LIB libspdk_thread.a 00:03:39.791 CC lib/nvme/nvme_discovery.o 00:03:39.791 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:39.791 SO libspdk_thread.so.11.0 00:03:40.051 SYMLINK libspdk_thread.so 00:03:40.051 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:40.051 CC lib/nvme/nvme_tcp.o 00:03:40.051 CC lib/nvme/nvme_opal.o 00:03:40.051 CC lib/nvme/nvme_io_msg.o 00:03:40.310 CC lib/nvme/nvme_poll_group.o 00:03:40.310 CC lib/nvme/nvme_zns.o 00:03:40.310 CC lib/nvme/nvme_stubs.o 00:03:40.310 CC lib/nvme/nvme_auth.o 00:03:40.570 CC lib/nvme/nvme_cuse.o 00:03:40.570 CC lib/nvme/nvme_rdma.o 00:03:40.830 CC lib/accel/accel.o 00:03:40.830 CC lib/blob/blobstore.o 00:03:40.830 CC lib/blob/request.o 00:03:40.830 CC lib/blob/zeroes.o 00:03:40.830 CC lib/blob/blob_bs_dev.o 00:03:41.089 CC lib/accel/accel_rpc.o 00:03:41.089 CC lib/accel/accel_sw.o 00:03:41.349 CC lib/init/json_config.o 00:03:41.349 CC lib/virtio/virtio.o 00:03:41.349 CC lib/init/subsystem.o 00:03:41.349 CC lib/virtio/virtio_vhost_user.o 00:03:41.609 CC lib/fsdev/fsdev.o 00:03:41.609 CC lib/init/subsystem_rpc.o 00:03:41.609 CC lib/init/rpc.o 00:03:41.609 CC lib/fsdev/fsdev_io.o 00:03:41.609 CC lib/fsdev/fsdev_rpc.o 00:03:41.609 CC lib/virtio/virtio_vfio_user.o 00:03:41.609 CC lib/virtio/virtio_pci.o 00:03:41.869 LIB libspdk_init.a 00:03:41.869 SO libspdk_init.so.6.0 00:03:41.869 SYMLINK libspdk_init.so 00:03:41.869 LIB libspdk_accel.a 00:03:42.128 LIB libspdk_virtio.a 00:03:42.128 SO libspdk_accel.so.16.0 00:03:42.128 SO libspdk_virtio.so.7.0 00:03:42.128 SYMLINK libspdk_accel.so 00:03:42.128 CC lib/event/app_rpc.o 00:03:42.128 CC lib/event/app.o 00:03:42.128 CC lib/event/reactor.o 00:03:42.128 
CC lib/event/scheduler_static.o 00:03:42.128 CC lib/event/log_rpc.o 00:03:42.128 LIB libspdk_nvme.a 00:03:42.128 SYMLINK libspdk_virtio.so 00:03:42.388 LIB libspdk_fsdev.a 00:03:42.388 SO libspdk_fsdev.so.2.0 00:03:42.388 SO libspdk_nvme.so.15.0 00:03:42.388 CC lib/bdev/bdev.o 00:03:42.388 SYMLINK libspdk_fsdev.so 00:03:42.388 CC lib/bdev/bdev_rpc.o 00:03:42.388 CC lib/bdev/bdev_zone.o 00:03:42.388 CC lib/bdev/part.o 00:03:42.388 CC lib/bdev/scsi_nvme.o 00:03:42.647 SYMLINK libspdk_nvme.so 00:03:42.647 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:42.647 LIB libspdk_event.a 00:03:42.915 SO libspdk_event.so.14.0 00:03:42.915 SYMLINK libspdk_event.so 00:03:43.187 LIB libspdk_fuse_dispatcher.a 00:03:43.447 SO libspdk_fuse_dispatcher.so.1.0 00:03:43.447 SYMLINK libspdk_fuse_dispatcher.so 00:03:44.386 LIB libspdk_blob.a 00:03:44.386 SO libspdk_blob.so.12.0 00:03:44.646 SYMLINK libspdk_blob.so 00:03:44.905 CC lib/lvol/lvol.o 00:03:44.905 CC lib/blobfs/blobfs.o 00:03:44.905 CC lib/blobfs/tree.o 00:03:45.165 LIB libspdk_bdev.a 00:03:45.424 SO libspdk_bdev.so.17.0 00:03:45.424 SYMLINK libspdk_bdev.so 00:03:45.684 CC lib/scsi/dev.o 00:03:45.684 CC lib/scsi/lun.o 00:03:45.684 CC lib/scsi/scsi.o 00:03:45.684 CC lib/ublk/ublk.o 00:03:45.684 CC lib/scsi/port.o 00:03:45.684 CC lib/nvmf/ctrlr.o 00:03:45.684 CC lib/nbd/nbd.o 00:03:45.684 CC lib/ftl/ftl_core.o 00:03:45.943 CC lib/nvmf/ctrlr_discovery.o 00:03:45.943 CC lib/nvmf/ctrlr_bdev.o 00:03:45.943 LIB libspdk_blobfs.a 00:03:45.943 SO libspdk_blobfs.so.11.0 00:03:45.943 CC lib/ublk/ublk_rpc.o 00:03:45.943 SYMLINK libspdk_blobfs.so 00:03:45.943 CC lib/scsi/scsi_bdev.o 00:03:45.943 LIB libspdk_lvol.a 00:03:45.943 CC lib/scsi/scsi_pr.o 00:03:45.943 SO libspdk_lvol.so.11.0 00:03:46.203 SYMLINK libspdk_lvol.so 00:03:46.203 CC lib/scsi/scsi_rpc.o 00:03:46.203 CC lib/scsi/task.o 00:03:46.203 CC lib/ftl/ftl_init.o 00:03:46.203 CC lib/nbd/nbd_rpc.o 00:03:46.203 CC lib/ftl/ftl_layout.o 00:03:46.203 CC lib/ftl/ftl_debug.o 00:03:46.463 CC 
lib/ftl/ftl_io.o 00:03:46.463 CC lib/nvmf/subsystem.o 00:03:46.463 LIB libspdk_nbd.a 00:03:46.463 SO libspdk_nbd.so.7.0 00:03:46.463 CC lib/nvmf/nvmf.o 00:03:46.463 LIB libspdk_ublk.a 00:03:46.463 SO libspdk_ublk.so.3.0 00:03:46.463 SYMLINK libspdk_nbd.so 00:03:46.463 CC lib/ftl/ftl_sb.o 00:03:46.463 SYMLINK libspdk_ublk.so 00:03:46.463 CC lib/ftl/ftl_l2p.o 00:03:46.463 LIB libspdk_scsi.a 00:03:46.463 CC lib/ftl/ftl_l2p_flat.o 00:03:46.463 CC lib/ftl/ftl_nv_cache.o 00:03:46.463 SO libspdk_scsi.so.9.0 00:03:46.721 CC lib/nvmf/nvmf_rpc.o 00:03:46.721 SYMLINK libspdk_scsi.so 00:03:46.721 CC lib/nvmf/transport.o 00:03:46.721 CC lib/nvmf/tcp.o 00:03:46.721 CC lib/ftl/ftl_band.o 00:03:46.721 CC lib/nvmf/stubs.o 00:03:46.721 CC lib/ftl/ftl_band_ops.o 00:03:46.980 CC lib/ftl/ftl_writer.o 00:03:46.980 CC lib/nvmf/mdns_server.o 00:03:47.239 CC lib/nvmf/rdma.o 00:03:47.239 CC lib/nvmf/auth.o 00:03:47.239 CC lib/ftl/ftl_rq.o 00:03:47.498 CC lib/ftl/ftl_reloc.o 00:03:47.498 CC lib/ftl/ftl_l2p_cache.o 00:03:47.498 CC lib/ftl/ftl_p2l.o 00:03:47.498 CC lib/ftl/ftl_p2l_log.o 00:03:47.758 CC lib/iscsi/conn.o 00:03:47.758 CC lib/iscsi/init_grp.o 00:03:47.758 CC lib/ftl/mngt/ftl_mngt.o 00:03:47.758 CC lib/vhost/vhost.o 00:03:48.017 CC lib/iscsi/iscsi.o 00:03:48.017 CC lib/vhost/vhost_rpc.o 00:03:48.017 CC lib/iscsi/param.o 00:03:48.017 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:48.017 CC lib/iscsi/portal_grp.o 00:03:48.277 CC lib/iscsi/tgt_node.o 00:03:48.277 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:48.277 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:48.277 CC lib/iscsi/iscsi_subsystem.o 00:03:48.277 CC lib/iscsi/iscsi_rpc.o 00:03:48.537 CC lib/iscsi/task.o 00:03:48.537 CC lib/vhost/vhost_scsi.o 00:03:48.537 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:48.537 CC lib/vhost/vhost_blk.o 00:03:48.797 CC lib/vhost/rte_vhost_user.o 00:03:48.797 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:48.797 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:48.797 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:48.797 CC lib/ftl/mngt/ftl_mngt_band.o 
00:03:48.797 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:49.057 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:49.057 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:49.057 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:49.057 CC lib/ftl/utils/ftl_conf.o 00:03:49.057 CC lib/ftl/utils/ftl_md.o 00:03:49.057 CC lib/ftl/utils/ftl_mempool.o 00:03:49.057 CC lib/ftl/utils/ftl_bitmap.o 00:03:49.317 CC lib/ftl/utils/ftl_property.o 00:03:49.317 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:49.317 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:49.317 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:49.317 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:49.317 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:49.317 LIB libspdk_iscsi.a 00:03:49.577 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:49.577 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:49.577 SO libspdk_iscsi.so.8.0 00:03:49.577 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:49.577 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:49.577 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:49.577 LIB libspdk_nvmf.a 00:03:49.577 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:49.577 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:49.577 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:49.577 SYMLINK libspdk_iscsi.so 00:03:49.577 LIB libspdk_vhost.a 00:03:49.577 CC lib/ftl/base/ftl_base_dev.o 00:03:49.577 CC lib/ftl/base/ftl_base_bdev.o 00:03:49.577 SO libspdk_nvmf.so.20.0 00:03:49.837 SO libspdk_vhost.so.8.0 00:03:49.837 CC lib/ftl/ftl_trace.o 00:03:49.837 SYMLINK libspdk_vhost.so 00:03:49.837 SYMLINK libspdk_nvmf.so 00:03:50.097 LIB libspdk_ftl.a 00:03:50.097 SO libspdk_ftl.so.9.0 00:03:50.359 SYMLINK libspdk_ftl.so 00:03:50.958 CC module/env_dpdk/env_dpdk_rpc.o 00:03:50.958 CC module/sock/posix/posix.o 00:03:50.958 CC module/keyring/file/keyring.o 00:03:50.958 CC module/accel/error/accel_error.o 00:03:50.958 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:50.958 CC module/fsdev/aio/fsdev_aio.o 00:03:50.958 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:50.958 CC 
module/scheduler/gscheduler/gscheduler.o 00:03:50.958 CC module/keyring/linux/keyring.o 00:03:50.958 CC module/blob/bdev/blob_bdev.o 00:03:50.958 LIB libspdk_env_dpdk_rpc.a 00:03:50.958 SO libspdk_env_dpdk_rpc.so.6.0 00:03:50.958 SYMLINK libspdk_env_dpdk_rpc.so 00:03:50.958 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:50.958 CC module/keyring/file/keyring_rpc.o 00:03:50.958 CC module/keyring/linux/keyring_rpc.o 00:03:50.958 LIB libspdk_scheduler_gscheduler.a 00:03:50.958 LIB libspdk_scheduler_dpdk_governor.a 00:03:51.218 SO libspdk_scheduler_gscheduler.so.4.0 00:03:51.218 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:51.218 LIB libspdk_scheduler_dynamic.a 00:03:51.218 CC module/accel/error/accel_error_rpc.o 00:03:51.218 SO libspdk_scheduler_dynamic.so.4.0 00:03:51.218 SYMLINK libspdk_scheduler_gscheduler.so 00:03:51.218 CC module/fsdev/aio/linux_aio_mgr.o 00:03:51.218 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:51.218 LIB libspdk_keyring_linux.a 00:03:51.218 LIB libspdk_keyring_file.a 00:03:51.218 SYMLINK libspdk_scheduler_dynamic.so 00:03:51.218 LIB libspdk_blob_bdev.a 00:03:51.218 SO libspdk_keyring_linux.so.1.0 00:03:51.218 SO libspdk_keyring_file.so.2.0 00:03:51.218 SO libspdk_blob_bdev.so.12.0 00:03:51.218 LIB libspdk_accel_error.a 00:03:51.218 SYMLINK libspdk_keyring_linux.so 00:03:51.218 SYMLINK libspdk_keyring_file.so 00:03:51.218 SYMLINK libspdk_blob_bdev.so 00:03:51.218 SO libspdk_accel_error.so.2.0 00:03:51.218 CC module/accel/ioat/accel_ioat.o 00:03:51.218 CC module/accel/ioat/accel_ioat_rpc.o 00:03:51.218 SYMLINK libspdk_accel_error.so 00:03:51.218 CC module/accel/dsa/accel_dsa.o 00:03:51.218 CC module/accel/iaa/accel_iaa.o 00:03:51.218 CC module/accel/dsa/accel_dsa_rpc.o 00:03:51.218 CC module/accel/iaa/accel_iaa_rpc.o 00:03:51.478 LIB libspdk_accel_ioat.a 00:03:51.478 CC module/bdev/delay/vbdev_delay.o 00:03:51.478 CC module/blobfs/bdev/blobfs_bdev.o 00:03:51.478 LIB libspdk_accel_iaa.a 00:03:51.478 SO libspdk_accel_ioat.so.6.0 00:03:51.478 SO 
libspdk_accel_iaa.so.3.0 00:03:51.478 CC module/bdev/error/vbdev_error.o 00:03:51.478 SYMLINK libspdk_accel_ioat.so 00:03:51.478 CC module/bdev/error/vbdev_error_rpc.o 00:03:51.478 CC module/bdev/gpt/gpt.o 00:03:51.478 LIB libspdk_accel_dsa.a 00:03:51.478 CC module/bdev/lvol/vbdev_lvol.o 00:03:51.738 LIB libspdk_fsdev_aio.a 00:03:51.738 SYMLINK libspdk_accel_iaa.so 00:03:51.738 CC module/bdev/gpt/vbdev_gpt.o 00:03:51.738 SO libspdk_accel_dsa.so.5.0 00:03:51.738 SO libspdk_fsdev_aio.so.1.0 00:03:51.738 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:51.738 SYMLINK libspdk_accel_dsa.so 00:03:51.738 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:51.738 LIB libspdk_sock_posix.a 00:03:51.738 SYMLINK libspdk_fsdev_aio.so 00:03:51.738 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:51.738 SO libspdk_sock_posix.so.6.0 00:03:51.738 LIB libspdk_blobfs_bdev.a 00:03:51.738 SYMLINK libspdk_sock_posix.so 00:03:51.997 LIB libspdk_bdev_error.a 00:03:51.997 SO libspdk_blobfs_bdev.so.6.0 00:03:51.997 LIB libspdk_bdev_gpt.a 00:03:51.997 LIB libspdk_bdev_delay.a 00:03:51.997 SO libspdk_bdev_error.so.6.0 00:03:51.997 CC module/bdev/malloc/bdev_malloc.o 00:03:51.997 SO libspdk_bdev_gpt.so.6.0 00:03:51.997 SYMLINK libspdk_blobfs_bdev.so 00:03:51.997 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:51.997 SO libspdk_bdev_delay.so.6.0 00:03:51.997 SYMLINK libspdk_bdev_error.so 00:03:51.997 SYMLINK libspdk_bdev_gpt.so 00:03:51.997 SYMLINK libspdk_bdev_delay.so 00:03:51.997 CC module/bdev/null/bdev_null.o 00:03:51.997 CC module/bdev/null/bdev_null_rpc.o 00:03:51.997 CC module/bdev/nvme/bdev_nvme.o 00:03:51.997 CC module/bdev/passthru/vbdev_passthru.o 00:03:51.997 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:51.997 CC module/bdev/nvme/nvme_rpc.o 00:03:51.997 CC module/bdev/raid/bdev_raid.o 00:03:52.257 CC module/bdev/split/vbdev_split.o 00:03:52.257 LIB libspdk_bdev_lvol.a 00:03:52.257 CC module/bdev/nvme/bdev_mdns_client.o 00:03:52.257 SO libspdk_bdev_lvol.so.6.0 00:03:52.257 SYMLINK libspdk_bdev_lvol.so 
00:03:52.257 CC module/bdev/nvme/vbdev_opal.o 00:03:52.257 LIB libspdk_bdev_null.a 00:03:52.257 SO libspdk_bdev_null.so.6.0 00:03:52.257 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:52.257 LIB libspdk_bdev_malloc.a 00:03:52.257 SYMLINK libspdk_bdev_null.so 00:03:52.257 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:52.257 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:52.257 SO libspdk_bdev_malloc.so.6.0 00:03:52.257 CC module/bdev/split/vbdev_split_rpc.o 00:03:52.517 SYMLINK libspdk_bdev_malloc.so 00:03:52.517 LIB libspdk_bdev_passthru.a 00:03:52.517 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:52.517 CC module/bdev/raid/bdev_raid_rpc.o 00:03:52.517 LIB libspdk_bdev_split.a 00:03:52.517 SO libspdk_bdev_passthru.so.6.0 00:03:52.517 SO libspdk_bdev_split.so.6.0 00:03:52.517 CC module/bdev/aio/bdev_aio.o 00:03:52.517 SYMLINK libspdk_bdev_passthru.so 00:03:52.517 SYMLINK libspdk_bdev_split.so 00:03:52.517 CC module/bdev/aio/bdev_aio_rpc.o 00:03:52.777 CC module/bdev/iscsi/bdev_iscsi.o 00:03:52.777 CC module/bdev/ftl/bdev_ftl.o 00:03:52.777 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:52.777 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:52.777 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:52.777 CC module/bdev/raid/bdev_raid_sb.o 00:03:52.777 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:52.777 LIB libspdk_bdev_aio.a 00:03:53.036 CC module/bdev/raid/raid0.o 00:03:53.037 SO libspdk_bdev_aio.so.6.0 00:03:53.037 CC module/bdev/raid/raid1.o 00:03:53.037 LIB libspdk_bdev_zone_block.a 00:03:53.037 LIB libspdk_bdev_ftl.a 00:03:53.037 SO libspdk_bdev_zone_block.so.6.0 00:03:53.037 SO libspdk_bdev_ftl.so.6.0 00:03:53.037 SYMLINK libspdk_bdev_aio.so 00:03:53.037 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:53.037 LIB libspdk_bdev_iscsi.a 00:03:53.037 SYMLINK libspdk_bdev_ftl.so 00:03:53.037 CC module/bdev/raid/concat.o 00:03:53.037 SYMLINK libspdk_bdev_zone_block.so 00:03:53.037 CC module/bdev/raid/raid5f.o 00:03:53.037 SO libspdk_bdev_iscsi.so.6.0 00:03:53.037 CC 
module/bdev/virtio/bdev_virtio_rpc.o 00:03:53.037 SYMLINK libspdk_bdev_iscsi.so 00:03:53.296 LIB libspdk_bdev_virtio.a 00:03:53.555 SO libspdk_bdev_virtio.so.6.0 00:03:53.556 SYMLINK libspdk_bdev_virtio.so 00:03:53.556 LIB libspdk_bdev_raid.a 00:03:53.556 SO libspdk_bdev_raid.so.6.0 00:03:53.815 SYMLINK libspdk_bdev_raid.so 00:03:54.754 LIB libspdk_bdev_nvme.a 00:03:54.754 SO libspdk_bdev_nvme.so.7.1 00:03:54.754 SYMLINK libspdk_bdev_nvme.so 00:03:55.693 CC module/event/subsystems/fsdev/fsdev.o 00:03:55.693 CC module/event/subsystems/keyring/keyring.o 00:03:55.693 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:55.693 CC module/event/subsystems/iobuf/iobuf.o 00:03:55.693 CC module/event/subsystems/sock/sock.o 00:03:55.693 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:55.693 CC module/event/subsystems/scheduler/scheduler.o 00:03:55.693 CC module/event/subsystems/vmd/vmd.o 00:03:55.693 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:55.693 LIB libspdk_event_fsdev.a 00:03:55.693 LIB libspdk_event_vhost_blk.a 00:03:55.693 LIB libspdk_event_sock.a 00:03:55.693 LIB libspdk_event_scheduler.a 00:03:55.693 LIB libspdk_event_vmd.a 00:03:55.693 LIB libspdk_event_keyring.a 00:03:55.693 LIB libspdk_event_iobuf.a 00:03:55.693 SO libspdk_event_fsdev.so.1.0 00:03:55.693 SO libspdk_event_vhost_blk.so.3.0 00:03:55.693 SO libspdk_event_sock.so.5.0 00:03:55.693 SO libspdk_event_scheduler.so.4.0 00:03:55.693 SO libspdk_event_keyring.so.1.0 00:03:55.693 SO libspdk_event_vmd.so.6.0 00:03:55.693 SO libspdk_event_iobuf.so.3.0 00:03:55.693 SYMLINK libspdk_event_fsdev.so 00:03:55.693 SYMLINK libspdk_event_sock.so 00:03:55.693 SYMLINK libspdk_event_scheduler.so 00:03:55.693 SYMLINK libspdk_event_vhost_blk.so 00:03:55.693 SYMLINK libspdk_event_keyring.so 00:03:55.693 SYMLINK libspdk_event_vmd.so 00:03:55.693 SYMLINK libspdk_event_iobuf.so 00:03:55.953 CC module/event/subsystems/accel/accel.o 00:03:56.213 LIB libspdk_event_accel.a 00:03:56.213 SO libspdk_event_accel.so.6.0 
00:03:56.213 SYMLINK libspdk_event_accel.so 00:03:56.783 CC module/event/subsystems/bdev/bdev.o 00:03:56.783 LIB libspdk_event_bdev.a 00:03:57.043 SO libspdk_event_bdev.so.6.0 00:03:57.043 SYMLINK libspdk_event_bdev.so 00:03:57.303 CC module/event/subsystems/nbd/nbd.o 00:03:57.303 CC module/event/subsystems/ublk/ublk.o 00:03:57.303 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:57.303 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:57.303 CC module/event/subsystems/scsi/scsi.o 00:03:57.303 LIB libspdk_event_nbd.a 00:03:57.303 SO libspdk_event_nbd.so.6.0 00:03:57.562 LIB libspdk_event_ublk.a 00:03:57.562 SYMLINK libspdk_event_nbd.so 00:03:57.562 SO libspdk_event_ublk.so.3.0 00:03:57.562 LIB libspdk_event_scsi.a 00:03:57.562 LIB libspdk_event_nvmf.a 00:03:57.562 SO libspdk_event_scsi.so.6.0 00:03:57.562 SYMLINK libspdk_event_ublk.so 00:03:57.562 SO libspdk_event_nvmf.so.6.0 00:03:57.562 SYMLINK libspdk_event_scsi.so 00:03:57.562 SYMLINK libspdk_event_nvmf.so 00:03:58.130 CC module/event/subsystems/iscsi/iscsi.o 00:03:58.130 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:58.130 LIB libspdk_event_iscsi.a 00:03:58.130 LIB libspdk_event_vhost_scsi.a 00:03:58.130 SO libspdk_event_iscsi.so.6.0 00:03:58.130 SO libspdk_event_vhost_scsi.so.3.0 00:03:58.130 SYMLINK libspdk_event_iscsi.so 00:03:58.389 SYMLINK libspdk_event_vhost_scsi.so 00:03:58.389 SO libspdk.so.6.0 00:03:58.389 SYMLINK libspdk.so 00:03:58.959 CC app/spdk_lspci/spdk_lspci.o 00:03:58.959 CXX app/trace/trace.o 00:03:58.959 CC app/trace_record/trace_record.o 00:03:58.959 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:58.959 CC app/nvmf_tgt/nvmf_main.o 00:03:58.959 CC app/iscsi_tgt/iscsi_tgt.o 00:03:58.959 CC examples/ioat/perf/perf.o 00:03:58.959 CC app/spdk_tgt/spdk_tgt.o 00:03:58.959 CC examples/util/zipf/zipf.o 00:03:58.959 CC test/thread/poller_perf/poller_perf.o 00:03:58.959 LINK spdk_lspci 00:03:58.959 LINK nvmf_tgt 00:03:58.959 LINK interrupt_tgt 00:03:58.959 LINK zipf 00:03:58.959 LINK 
poller_perf 00:03:58.959 LINK iscsi_tgt 00:03:58.959 LINK spdk_trace_record 00:03:58.959 LINK spdk_tgt 00:03:59.219 LINK ioat_perf 00:03:59.219 CC examples/ioat/verify/verify.o 00:03:59.219 LINK spdk_trace 00:03:59.219 CC app/spdk_nvme_perf/perf.o 00:03:59.219 CC app/spdk_nvme_identify/identify.o 00:03:59.219 CC app/spdk_nvme_discover/discovery_aer.o 00:03:59.219 CC app/spdk_top/spdk_top.o 00:03:59.478 CC app/spdk_dd/spdk_dd.o 00:03:59.478 LINK verify 00:03:59.478 CC examples/thread/thread/thread_ex.o 00:03:59.478 CC test/dma/test_dma/test_dma.o 00:03:59.478 CC examples/sock/hello_world/hello_sock.o 00:03:59.478 CC app/fio/nvme/fio_plugin.o 00:03:59.478 LINK spdk_nvme_discover 00:03:59.737 LINK thread 00:03:59.737 LINK hello_sock 00:03:59.737 CC test/app/bdev_svc/bdev_svc.o 00:03:59.737 LINK spdk_dd 00:03:59.737 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:59.737 LINK bdev_svc 00:03:59.737 TEST_HEADER include/spdk/accel.h 00:03:59.737 LINK test_dma 00:03:59.737 TEST_HEADER include/spdk/accel_module.h 00:03:59.737 TEST_HEADER include/spdk/assert.h 00:03:59.737 TEST_HEADER include/spdk/barrier.h 00:03:59.737 TEST_HEADER include/spdk/base64.h 00:03:59.737 TEST_HEADER include/spdk/bdev.h 00:03:59.737 TEST_HEADER include/spdk/bdev_module.h 00:03:59.996 TEST_HEADER include/spdk/bdev_zone.h 00:03:59.996 TEST_HEADER include/spdk/bit_array.h 00:03:59.996 TEST_HEADER include/spdk/bit_pool.h 00:03:59.996 TEST_HEADER include/spdk/blob_bdev.h 00:03:59.996 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:59.996 TEST_HEADER include/spdk/blobfs.h 00:03:59.996 TEST_HEADER include/spdk/blob.h 00:03:59.996 TEST_HEADER include/spdk/conf.h 00:03:59.996 TEST_HEADER include/spdk/config.h 00:03:59.996 TEST_HEADER include/spdk/cpuset.h 00:03:59.996 TEST_HEADER include/spdk/crc16.h 00:03:59.996 TEST_HEADER include/spdk/crc32.h 00:03:59.996 TEST_HEADER include/spdk/crc64.h 00:03:59.996 TEST_HEADER include/spdk/dif.h 00:03:59.996 TEST_HEADER include/spdk/dma.h 00:03:59.996 TEST_HEADER 
include/spdk/endian.h 00:03:59.996 TEST_HEADER include/spdk/env_dpdk.h 00:03:59.996 TEST_HEADER include/spdk/env.h 00:03:59.996 TEST_HEADER include/spdk/event.h 00:03:59.996 TEST_HEADER include/spdk/fd_group.h 00:03:59.996 TEST_HEADER include/spdk/fd.h 00:03:59.996 TEST_HEADER include/spdk/file.h 00:03:59.996 TEST_HEADER include/spdk/fsdev.h 00:03:59.997 TEST_HEADER include/spdk/fsdev_module.h 00:03:59.997 TEST_HEADER include/spdk/ftl.h 00:03:59.997 TEST_HEADER include/spdk/gpt_spec.h 00:03:59.997 TEST_HEADER include/spdk/hexlify.h 00:03:59.997 TEST_HEADER include/spdk/histogram_data.h 00:03:59.997 TEST_HEADER include/spdk/idxd.h 00:03:59.997 TEST_HEADER include/spdk/idxd_spec.h 00:03:59.997 TEST_HEADER include/spdk/init.h 00:03:59.997 TEST_HEADER include/spdk/ioat.h 00:03:59.997 TEST_HEADER include/spdk/ioat_spec.h 00:03:59.997 TEST_HEADER include/spdk/iscsi_spec.h 00:03:59.997 TEST_HEADER include/spdk/json.h 00:03:59.997 TEST_HEADER include/spdk/jsonrpc.h 00:03:59.997 TEST_HEADER include/spdk/keyring.h 00:03:59.997 TEST_HEADER include/spdk/keyring_module.h 00:03:59.997 TEST_HEADER include/spdk/likely.h 00:03:59.997 CC examples/vmd/lsvmd/lsvmd.o 00:03:59.997 TEST_HEADER include/spdk/log.h 00:03:59.997 TEST_HEADER include/spdk/lvol.h 00:03:59.997 TEST_HEADER include/spdk/md5.h 00:03:59.997 TEST_HEADER include/spdk/memory.h 00:03:59.997 TEST_HEADER include/spdk/mmio.h 00:03:59.997 TEST_HEADER include/spdk/nbd.h 00:03:59.997 TEST_HEADER include/spdk/net.h 00:03:59.997 TEST_HEADER include/spdk/notify.h 00:03:59.997 TEST_HEADER include/spdk/nvme.h 00:03:59.997 TEST_HEADER include/spdk/nvme_intel.h 00:03:59.997 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:59.997 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:59.997 TEST_HEADER include/spdk/nvme_spec.h 00:03:59.997 TEST_HEADER include/spdk/nvme_zns.h 00:03:59.997 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:59.997 CC examples/vmd/led/led.o 00:03:59.997 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:59.997 TEST_HEADER 
include/spdk/nvmf.h 00:03:59.997 TEST_HEADER include/spdk/nvmf_spec.h 00:03:59.997 TEST_HEADER include/spdk/nvmf_transport.h 00:03:59.997 TEST_HEADER include/spdk/opal.h 00:03:59.997 TEST_HEADER include/spdk/opal_spec.h 00:03:59.997 TEST_HEADER include/spdk/pci_ids.h 00:03:59.997 TEST_HEADER include/spdk/pipe.h 00:03:59.997 TEST_HEADER include/spdk/queue.h 00:03:59.997 TEST_HEADER include/spdk/reduce.h 00:03:59.997 TEST_HEADER include/spdk/rpc.h 00:03:59.997 TEST_HEADER include/spdk/scheduler.h 00:03:59.997 TEST_HEADER include/spdk/scsi.h 00:03:59.997 TEST_HEADER include/spdk/scsi_spec.h 00:03:59.997 TEST_HEADER include/spdk/sock.h 00:03:59.997 TEST_HEADER include/spdk/stdinc.h 00:03:59.997 TEST_HEADER include/spdk/string.h 00:03:59.997 TEST_HEADER include/spdk/thread.h 00:03:59.997 TEST_HEADER include/spdk/trace.h 00:03:59.997 TEST_HEADER include/spdk/trace_parser.h 00:03:59.997 TEST_HEADER include/spdk/tree.h 00:03:59.997 TEST_HEADER include/spdk/ublk.h 00:03:59.997 TEST_HEADER include/spdk/util.h 00:03:59.997 TEST_HEADER include/spdk/uuid.h 00:03:59.997 TEST_HEADER include/spdk/version.h 00:03:59.997 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:59.997 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:59.997 TEST_HEADER include/spdk/vhost.h 00:03:59.997 TEST_HEADER include/spdk/vmd.h 00:03:59.997 TEST_HEADER include/spdk/xor.h 00:03:59.997 TEST_HEADER include/spdk/zipf.h 00:03:59.997 CXX test/cpp_headers/accel.o 00:03:59.997 CXX test/cpp_headers/accel_module.o 00:03:59.997 CXX test/cpp_headers/assert.o 00:03:59.997 LINK spdk_nvme 00:03:59.997 LINK lsvmd 00:03:59.997 LINK led 00:04:00.258 LINK spdk_nvme_perf 00:04:00.258 LINK spdk_nvme_identify 00:04:00.258 CXX test/cpp_headers/barrier.o 00:04:00.258 LINK nvme_fuzz 00:04:00.258 CXX test/cpp_headers/base64.o 00:04:00.258 LINK spdk_top 00:04:00.258 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:00.258 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:00.258 CC app/fio/bdev/fio_plugin.o 00:04:00.258 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:00.258 CXX test/cpp_headers/bdev.o 00:04:00.525 CXX test/cpp_headers/bdev_module.o 00:04:00.525 CC test/app/histogram_perf/histogram_perf.o 00:04:00.525 CC examples/idxd/perf/perf.o 00:04:00.525 CC test/app/jsoncat/jsoncat.o 00:04:00.525 CC test/app/stub/stub.o 00:04:00.525 CC app/vhost/vhost.o 00:04:00.525 LINK histogram_perf 00:04:00.525 LINK jsoncat 00:04:00.525 CXX test/cpp_headers/bdev_zone.o 00:04:00.525 LINK stub 00:04:00.793 LINK vhost 00:04:00.793 LINK vhost_fuzz 00:04:00.793 CXX test/cpp_headers/bit_array.o 00:04:00.793 LINK idxd_perf 00:04:00.793 LINK spdk_bdev 00:04:00.793 CC test/env/vtophys/vtophys.o 00:04:00.793 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:00.793 CC test/env/mem_callbacks/mem_callbacks.o 00:04:01.053 CXX test/cpp_headers/bit_pool.o 00:04:01.053 LINK vtophys 00:04:01.053 LINK env_dpdk_post_init 00:04:01.053 CC test/event/event_perf/event_perf.o 00:04:01.053 CC test/event/reactor/reactor.o 00:04:01.053 CC test/event/reactor_perf/reactor_perf.o 00:04:01.053 CC test/env/memory/memory_ut.o 00:04:01.053 CXX test/cpp_headers/blob_bdev.o 00:04:01.053 CXX test/cpp_headers/blobfs_bdev.o 00:04:01.053 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:01.053 LINK reactor 00:04:01.053 LINK event_perf 00:04:01.053 LINK reactor_perf 00:04:01.312 CXX test/cpp_headers/blobfs.o 00:04:01.312 CC examples/accel/perf/accel_perf.o 00:04:01.312 CC test/event/app_repeat/app_repeat.o 00:04:01.312 LINK hello_fsdev 00:04:01.312 LINK mem_callbacks 00:04:01.312 CC test/event/scheduler/scheduler.o 00:04:01.312 CC examples/nvme/hello_world/hello_world.o 00:04:01.312 CXX test/cpp_headers/blob.o 00:04:01.312 CC examples/blob/hello_world/hello_blob.o 00:04:01.572 LINK app_repeat 00:04:01.572 CXX test/cpp_headers/conf.o 00:04:01.572 CXX test/cpp_headers/config.o 00:04:01.572 CC examples/nvme/reconnect/reconnect.o 00:04:01.572 LINK scheduler 00:04:01.572 LINK hello_world 00:04:01.572 LINK hello_blob 00:04:01.572 
CXX test/cpp_headers/cpuset.o 00:04:01.832 CC test/rpc_client/rpc_client_test.o 00:04:01.832 CC test/nvme/aer/aer.o 00:04:01.832 CXX test/cpp_headers/crc16.o 00:04:01.832 CXX test/cpp_headers/crc32.o 00:04:01.832 LINK accel_perf 00:04:01.832 LINK rpc_client_test 00:04:01.832 LINK reconnect 00:04:01.832 CXX test/cpp_headers/crc64.o 00:04:01.832 CC examples/blob/cli/blobcli.o 00:04:02.091 CC test/accel/dif/dif.o 00:04:02.091 LINK aer 00:04:02.091 CXX test/cpp_headers/dif.o 00:04:02.091 LINK iscsi_fuzz 00:04:02.091 CC test/blobfs/mkfs/mkfs.o 00:04:02.091 LINK memory_ut 00:04:02.091 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:02.091 CC test/lvol/esnap/esnap.o 00:04:02.091 CC examples/bdev/hello_world/hello_bdev.o 00:04:02.351 CXX test/cpp_headers/dma.o 00:04:02.351 LINK mkfs 00:04:02.351 CC test/nvme/reset/reset.o 00:04:02.351 CC test/nvme/sgl/sgl.o 00:04:02.351 CXX test/cpp_headers/endian.o 00:04:02.351 CC test/env/pci/pci_ut.o 00:04:02.351 LINK hello_bdev 00:04:02.351 LINK blobcli 00:04:02.610 CXX test/cpp_headers/env_dpdk.o 00:04:02.610 CC test/nvme/e2edp/nvme_dp.o 00:04:02.610 LINK reset 00:04:02.610 LINK sgl 00:04:02.610 LINK nvme_manage 00:04:02.610 CXX test/cpp_headers/env.o 00:04:02.610 CC test/nvme/overhead/overhead.o 00:04:02.610 LINK dif 00:04:02.610 CC examples/bdev/bdevperf/bdevperf.o 00:04:02.869 CC test/nvme/err_injection/err_injection.o 00:04:02.869 LINK pci_ut 00:04:02.869 CXX test/cpp_headers/event.o 00:04:02.869 LINK nvme_dp 00:04:02.869 CC examples/nvme/arbitration/arbitration.o 00:04:02.869 CC test/nvme/startup/startup.o 00:04:02.869 LINK err_injection 00:04:02.869 CXX test/cpp_headers/fd_group.o 00:04:02.869 LINK overhead 00:04:03.129 CC examples/nvme/hotplug/hotplug.o 00:04:03.129 CC test/nvme/reserve/reserve.o 00:04:03.129 LINK startup 00:04:03.129 CXX test/cpp_headers/fd.o 00:04:03.129 CXX test/cpp_headers/file.o 00:04:03.129 CC test/nvme/simple_copy/simple_copy.o 00:04:03.129 LINK arbitration 00:04:03.129 CC 
test/nvme/connect_stress/connect_stress.o 00:04:03.129 LINK hotplug 00:04:03.129 CXX test/cpp_headers/fsdev.o 00:04:03.129 LINK reserve 00:04:03.388 CC test/nvme/boot_partition/boot_partition.o 00:04:03.388 LINK simple_copy 00:04:03.388 CC test/bdev/bdevio/bdevio.o 00:04:03.388 LINK connect_stress 00:04:03.388 CXX test/cpp_headers/fsdev_module.o 00:04:03.388 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:03.388 CC examples/nvme/abort/abort.o 00:04:03.388 LINK boot_partition 00:04:03.388 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:03.647 CC test/nvme/compliance/nvme_compliance.o 00:04:03.647 LINK bdevperf 00:04:03.647 CXX test/cpp_headers/ftl.o 00:04:03.647 LINK cmb_copy 00:04:03.647 LINK pmr_persistence 00:04:03.647 CC test/nvme/fused_ordering/fused_ordering.o 00:04:03.647 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:03.647 LINK bdevio 00:04:03.647 CXX test/cpp_headers/gpt_spec.o 00:04:03.906 CXX test/cpp_headers/hexlify.o 00:04:03.906 LINK abort 00:04:03.906 LINK fused_ordering 00:04:03.906 CC test/nvme/fdp/fdp.o 00:04:03.906 CC test/nvme/cuse/cuse.o 00:04:03.906 LINK doorbell_aers 00:04:03.906 LINK nvme_compliance 00:04:03.906 CXX test/cpp_headers/histogram_data.o 00:04:03.906 CXX test/cpp_headers/idxd.o 00:04:03.906 CXX test/cpp_headers/idxd_spec.o 00:04:03.906 CXX test/cpp_headers/init.o 00:04:03.906 CXX test/cpp_headers/ioat.o 00:04:04.166 CXX test/cpp_headers/ioat_spec.o 00:04:04.166 CXX test/cpp_headers/iscsi_spec.o 00:04:04.166 CXX test/cpp_headers/json.o 00:04:04.166 CXX test/cpp_headers/jsonrpc.o 00:04:04.166 CXX test/cpp_headers/keyring.o 00:04:04.166 CXX test/cpp_headers/keyring_module.o 00:04:04.166 CXX test/cpp_headers/likely.o 00:04:04.166 LINK fdp 00:04:04.166 CC examples/nvmf/nvmf/nvmf.o 00:04:04.166 CXX test/cpp_headers/log.o 00:04:04.166 CXX test/cpp_headers/lvol.o 00:04:04.166 CXX test/cpp_headers/md5.o 00:04:04.425 CXX test/cpp_headers/memory.o 00:04:04.425 CXX test/cpp_headers/mmio.o 00:04:04.425 CXX test/cpp_headers/nbd.o 
00:04:04.425 CXX test/cpp_headers/net.o 00:04:04.425 CXX test/cpp_headers/notify.o 00:04:04.425 CXX test/cpp_headers/nvme.o 00:04:04.425 CXX test/cpp_headers/nvme_intel.o 00:04:04.425 CXX test/cpp_headers/nvme_ocssd.o 00:04:04.425 LINK nvmf 00:04:04.425 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:04.425 CXX test/cpp_headers/nvme_spec.o 00:04:04.425 CXX test/cpp_headers/nvme_zns.o 00:04:04.425 CXX test/cpp_headers/nvmf_cmd.o 00:04:04.686 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:04.686 CXX test/cpp_headers/nvmf.o 00:04:04.686 CXX test/cpp_headers/nvmf_spec.o 00:04:04.686 CXX test/cpp_headers/nvmf_transport.o 00:04:04.686 CXX test/cpp_headers/opal.o 00:04:04.686 CXX test/cpp_headers/opal_spec.o 00:04:04.686 CXX test/cpp_headers/pci_ids.o 00:04:04.686 CXX test/cpp_headers/pipe.o 00:04:04.686 CXX test/cpp_headers/queue.o 00:04:04.686 CXX test/cpp_headers/reduce.o 00:04:04.686 CXX test/cpp_headers/rpc.o 00:04:04.686 CXX test/cpp_headers/scheduler.o 00:04:04.686 CXX test/cpp_headers/scsi.o 00:04:04.946 CXX test/cpp_headers/scsi_spec.o 00:04:04.946 CXX test/cpp_headers/sock.o 00:04:04.946 CXX test/cpp_headers/stdinc.o 00:04:04.946 CXX test/cpp_headers/string.o 00:04:04.946 CXX test/cpp_headers/thread.o 00:04:04.946 CXX test/cpp_headers/trace.o 00:04:04.946 CXX test/cpp_headers/trace_parser.o 00:04:04.946 CXX test/cpp_headers/tree.o 00:04:04.946 CXX test/cpp_headers/ublk.o 00:04:04.946 CXX test/cpp_headers/util.o 00:04:04.946 CXX test/cpp_headers/uuid.o 00:04:04.946 CXX test/cpp_headers/version.o 00:04:04.946 CXX test/cpp_headers/vfio_user_pci.o 00:04:04.946 CXX test/cpp_headers/vfio_user_spec.o 00:04:04.946 CXX test/cpp_headers/vhost.o 00:04:04.946 CXX test/cpp_headers/vmd.o 00:04:04.946 CXX test/cpp_headers/xor.o 00:04:05.206 CXX test/cpp_headers/zipf.o 00:04:05.206 LINK cuse 00:04:07.743 LINK esnap 00:04:08.002 00:04:08.002 real 1m22.651s 00:04:08.002 user 7m19.218s 00:04:08.002 sys 1m28.068s 00:04:08.002 05:42:15 make -- common/autotest_common.sh@1130 -- $ 
xtrace_disable 00:04:08.002 05:42:15 make -- common/autotest_common.sh@10 -- $ set +x 00:04:08.002 ************************************ 00:04:08.002 END TEST make 00:04:08.002 ************************************ 00:04:08.002 05:42:15 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:08.002 05:42:15 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:08.002 05:42:15 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:08.002 05:42:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.002 05:42:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:08.002 05:42:15 -- pm/common@44 -- $ pid=5455 00:04:08.002 05:42:15 -- pm/common@50 -- $ kill -TERM 5455 00:04:08.002 05:42:15 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.002 05:42:15 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:08.002 05:42:15 -- pm/common@44 -- $ pid=5457 00:04:08.002 05:42:15 -- pm/common@50 -- $ kill -TERM 5457 00:04:08.002 05:42:15 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:08.003 05:42:15 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:08.003 05:42:15 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:08.003 05:42:15 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:08.003 05:42:15 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:08.263 05:42:15 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:08.263 05:42:15 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.263 05:42:15 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.263 05:42:15 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.263 05:42:15 -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.263 05:42:15 -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.263 05:42:15 -- 
scripts/common.sh@337 -- # IFS=.-: 00:04:08.263 05:42:15 -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.263 05:42:15 -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.263 05:42:15 -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.263 05:42:15 -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.263 05:42:15 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.263 05:42:15 -- scripts/common.sh@344 -- # case "$op" in 00:04:08.263 05:42:15 -- scripts/common.sh@345 -- # : 1 00:04:08.263 05:42:15 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.263 05:42:15 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:08.263 05:42:15 -- scripts/common.sh@365 -- # decimal 1 00:04:08.263 05:42:15 -- scripts/common.sh@353 -- # local d=1 00:04:08.264 05:42:15 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.264 05:42:15 -- scripts/common.sh@355 -- # echo 1 00:04:08.264 05:42:15 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.264 05:42:15 -- scripts/common.sh@366 -- # decimal 2 00:04:08.264 05:42:15 -- scripts/common.sh@353 -- # local d=2 00:04:08.264 05:42:15 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.264 05:42:15 -- scripts/common.sh@355 -- # echo 2 00:04:08.264 05:42:15 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.264 05:42:15 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.264 05:42:15 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.264 05:42:15 -- scripts/common.sh@368 -- # return 0 00:04:08.264 05:42:15 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.264 05:42:15 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:08.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.264 --rc genhtml_branch_coverage=1 00:04:08.264 --rc genhtml_function_coverage=1 00:04:08.264 --rc genhtml_legend=1 00:04:08.264 --rc geninfo_all_blocks=1 00:04:08.264 --rc geninfo_unexecuted_blocks=1 
00:04:08.264 00:04:08.264 ' 00:04:08.264 05:42:15 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:08.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.264 --rc genhtml_branch_coverage=1 00:04:08.264 --rc genhtml_function_coverage=1 00:04:08.264 --rc genhtml_legend=1 00:04:08.264 --rc geninfo_all_blocks=1 00:04:08.264 --rc geninfo_unexecuted_blocks=1 00:04:08.264 00:04:08.264 ' 00:04:08.264 05:42:15 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:08.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.264 --rc genhtml_branch_coverage=1 00:04:08.264 --rc genhtml_function_coverage=1 00:04:08.264 --rc genhtml_legend=1 00:04:08.264 --rc geninfo_all_blocks=1 00:04:08.264 --rc geninfo_unexecuted_blocks=1 00:04:08.264 00:04:08.264 ' 00:04:08.264 05:42:15 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:08.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.264 --rc genhtml_branch_coverage=1 00:04:08.264 --rc genhtml_function_coverage=1 00:04:08.264 --rc genhtml_legend=1 00:04:08.264 --rc geninfo_all_blocks=1 00:04:08.264 --rc geninfo_unexecuted_blocks=1 00:04:08.264 00:04:08.264 ' 00:04:08.264 05:42:15 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:08.264 05:42:15 -- nvmf/common.sh@7 -- # uname -s 00:04:08.264 05:42:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:08.264 05:42:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:08.264 05:42:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:08.264 05:42:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:08.264 05:42:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:08.264 05:42:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:08.264 05:42:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:08.264 05:42:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:08.264 05:42:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:08.264 
05:42:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:08.264 05:42:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dedb147-6356-40e2-9718-b1cf30e7de80 00:04:08.264 05:42:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=1dedb147-6356-40e2-9718-b1cf30e7de80 00:04:08.264 05:42:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:08.264 05:42:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:08.264 05:42:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:08.264 05:42:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:08.264 05:42:15 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:08.264 05:42:15 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:08.264 05:42:15 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:08.264 05:42:15 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:08.264 05:42:15 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:08.264 05:42:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.264 05:42:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.264 05:42:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:04:08.264 05:42:15 -- paths/export.sh@5 -- # export PATH 00:04:08.264 05:42:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:08.264 05:42:15 -- nvmf/common.sh@51 -- # : 0 00:04:08.264 05:42:15 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:08.264 05:42:15 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:08.264 05:42:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:08.264 05:42:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:08.264 05:42:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:08.264 05:42:15 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:08.264 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:08.264 05:42:15 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:08.264 05:42:15 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:08.264 05:42:15 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:08.264 05:42:15 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:08.264 05:42:15 -- spdk/autotest.sh@32 -- # uname -s 00:04:08.264 05:42:15 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:08.264 05:42:15 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:08.264 05:42:15 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:08.264 05:42:15 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:08.264 05:42:15 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:08.264 05:42:15 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:08.264 05:42:15 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:08.264 05:42:15 -- 
spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:08.264 05:42:15 -- spdk/autotest.sh@48 -- # udevadm_pid=55598 00:04:08.264 05:42:15 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:08.264 05:42:15 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:08.264 05:42:15 -- pm/common@17 -- # local monitor 00:04:08.264 05:42:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.264 05:42:15 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.264 05:42:15 -- pm/common@25 -- # sleep 1 00:04:08.264 05:42:15 -- pm/common@21 -- # date +%s 00:04:08.264 05:42:15 -- pm/common@21 -- # date +%s 00:04:08.264 05:42:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733982135 00:04:08.264 05:42:15 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733982135 00:04:08.264 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733982135_collect-vmstat.pm.log 00:04:08.264 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733982135_collect-cpu-load.pm.log 00:04:09.644 05:42:16 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:09.644 05:42:16 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:09.644 05:42:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:09.644 05:42:16 -- common/autotest_common.sh@10 -- # set +x 00:04:09.644 05:42:16 -- spdk/autotest.sh@59 -- # create_test_list 00:04:09.644 05:42:16 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:09.644 05:42:16 -- common/autotest_common.sh@10 -- # set +x 00:04:09.644 05:42:16 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:09.644 05:42:16 -- spdk/autotest.sh@61 -- # readlink -f 
/home/vagrant/spdk_repo/spdk 00:04:09.644 05:42:16 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:09.644 05:42:16 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:09.644 05:42:16 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:09.644 05:42:16 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:09.644 05:42:16 -- common/autotest_common.sh@1457 -- # uname 00:04:09.644 05:42:16 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:09.644 05:42:16 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:09.644 05:42:16 -- common/autotest_common.sh@1477 -- # uname 00:04:09.644 05:42:16 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:09.644 05:42:16 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:09.644 05:42:16 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:09.644 lcov: LCOV version 1.15 00:04:09.644 05:42:16 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:24.543 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:24.543 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:36.763 05:42:43 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:36.763 05:42:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:36.763 05:42:43 -- common/autotest_common.sh@10 -- # set +x 00:04:36.763 05:42:43 -- spdk/autotest.sh@78 -- # rm -f 00:04:36.763 05:42:43 -- 
spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:37.022 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:37.022 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:37.022 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:37.022 05:42:44 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:37.022 05:42:44 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:37.022 05:42:44 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:37.022 05:42:44 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:37.022 05:42:44 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:37.022 05:42:44 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:37.022 05:42:44 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:37.022 05:42:44 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:04:37.022 05:42:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:37.022 05:42:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:37.022 05:42:44 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:37.022 05:42:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:37.022 05:42:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:37.022 05:42:44 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:37.022 05:42:44 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:04:37.022 05:42:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:37.022 05:42:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:04:37.022 05:42:44 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:37.022 05:42:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:37.022 05:42:44 -- common/autotest_common.sh@1653 -- # [[ 
none != none ]] 00:04:37.022 05:42:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:37.022 05:42:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:04:37.022 05:42:44 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:04:37.022 05:42:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:37.022 05:42:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:37.022 05:42:44 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:37.022 05:42:44 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:04:37.022 05:42:44 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:04:37.022 05:42:44 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:37.022 05:42:44 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:37.022 05:42:44 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:37.022 05:42:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:37.022 05:42:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:37.022 05:42:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:37.022 05:42:44 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:37.022 05:42:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:37.282 No valid GPT data, bailing 00:04:37.282 05:42:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:37.282 05:42:44 -- scripts/common.sh@394 -- # pt= 00:04:37.282 05:42:44 -- scripts/common.sh@395 -- # return 1 00:04:37.282 05:42:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:37.282 1+0 records in 00:04:37.282 1+0 records out 00:04:37.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00691182 s, 152 MB/s 00:04:37.282 05:42:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:37.282 05:42:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:37.282 05:42:44 
-- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:37.282 05:42:44 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:37.282 05:42:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:37.282 No valid GPT data, bailing 00:04:37.282 05:42:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:37.282 05:42:44 -- scripts/common.sh@394 -- # pt= 00:04:37.282 05:42:44 -- scripts/common.sh@395 -- # return 1 00:04:37.282 05:42:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:37.282 1+0 records in 00:04:37.282 1+0 records out 00:04:37.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00647478 s, 162 MB/s 00:04:37.282 05:42:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:37.282 05:42:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:37.282 05:42:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:37.282 05:42:44 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:37.282 05:42:44 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:37.282 No valid GPT data, bailing 00:04:37.282 05:42:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:37.282 05:42:44 -- scripts/common.sh@394 -- # pt= 00:04:37.282 05:42:44 -- scripts/common.sh@395 -- # return 1 00:04:37.282 05:42:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:37.282 1+0 records in 00:04:37.282 1+0 records out 00:04:37.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00632919 s, 166 MB/s 00:04:37.282 05:42:44 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:37.282 05:42:44 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:37.282 05:42:44 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:37.282 05:42:44 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:37.282 05:42:44 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:37.541 No valid GPT data, bailing 00:04:37.541 05:42:44 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:37.541 05:42:44 -- scripts/common.sh@394 -- # pt= 00:04:37.541 05:42:44 -- scripts/common.sh@395 -- # return 1 00:04:37.541 05:42:44 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:37.541 1+0 records in 00:04:37.541 1+0 records out 00:04:37.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00449572 s, 233 MB/s 00:04:37.541 05:42:44 -- spdk/autotest.sh@105 -- # sync 00:04:37.799 05:42:45 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:37.799 05:42:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:37.799 05:42:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:41.091 05:42:48 -- spdk/autotest.sh@111 -- # uname -s 00:04:41.091 05:42:48 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:41.091 05:42:48 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:41.091 05:42:48 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:41.351 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:41.351 Hugepages 00:04:41.351 node hugesize free / total 00:04:41.351 node0 1048576kB 0 / 0 00:04:41.351 node0 2048kB 0 / 0 00:04:41.351 00:04:41.351 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:41.610 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:41.610 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:41.870 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:41.870 05:42:49 -- spdk/autotest.sh@117 -- # uname -s 00:04:41.870 05:42:49 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:41.870 05:42:49 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:41.870 05:42:49 -- common/autotest_common.sh@1516 -- # 
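Each `block_in_use` pass above first asks `scripts/spdk-gpt.py` about the device, falls back to `blkid -s PTTYPE`, and, when no partition table turns up ("No valid GPT data, bailing", empty `pt=`), zeroes the first MiB so stale metadata cannot leak into the tests. A sketch of the fallback path only, under the assumption that an empty `PTTYPE` means the device is free (the spdk-gpt.py step is omitted here):

```shell
# A device is considered "in use" when blkid can read a partition-table
# type from it; this mirrors only the blkid fallback of scripts/common.sh.
block_in_use() {
    local block=$1 pt
    pt=$(blkid -s PTTYPE -o value "$block" 2> /dev/null)
    [[ -n $pt ]]  # non-empty PTTYPE -> something owns this device
}

# Free devices get their first MiB zeroed, matching the dd lines above.
wipe_if_free() {
    local dev=$1
    if ! block_in_use "$dev"; then
        dd if=/dev/zero of="$dev" bs=1M count=1 2> /dev/null
    fi
}
```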
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:42.438 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:42.697 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:42.697 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:42.697 05:42:50 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:44.077 05:42:51 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:44.077 05:42:51 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:44.077 05:42:51 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:44.077 05:42:51 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:44.077 05:42:51 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:44.077 05:42:51 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:44.077 05:42:51 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:44.077 05:42:51 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:44.077 05:42:51 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:44.077 05:42:51 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:44.077 05:42:51 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:44.077 05:42:51 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:44.337 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:44.337 Waiting for block devices as requested 00:04:44.337 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:44.597 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:44.597 05:42:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:44.597 05:42:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:44.597 05:42:51 -- common/autotest_common.sh@1487 -- # readlink -f 
/sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:44.597 05:42:51 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:44.597 05:42:51 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:44.597 05:42:51 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:44.597 05:42:51 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:44.597 05:42:51 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:44.597 05:42:51 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:44.597 05:42:51 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:44.597 05:42:51 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:44.597 05:42:51 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:44.597 05:42:51 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:44.597 05:42:51 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:44.597 05:42:51 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:44.597 05:42:51 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:44.597 05:42:51 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:44.597 05:42:51 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:44.597 05:42:51 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:44.597 05:42:51 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:44.597 05:42:51 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:44.597 05:42:51 -- common/autotest_common.sh@1543 -- # continue 00:04:44.597 05:42:51 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:44.597 05:42:51 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:44.597 05:42:51 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:44.597 05:42:51 -- 
common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:44.597 05:42:52 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:44.597 05:42:52 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:44.597 05:42:52 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:44.597 05:42:52 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:44.597 05:42:52 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:44.597 05:42:52 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:44.597 05:42:52 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:44.597 05:42:52 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:44.597 05:42:52 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:44.597 05:42:52 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:44.597 05:42:52 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:44.597 05:42:52 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:44.597 05:42:52 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:44.597 05:42:52 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:44.597 05:42:52 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:44.597 05:42:52 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:44.597 05:42:52 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:44.597 05:42:52 -- common/autotest_common.sh@1543 -- # continue 00:04:44.597 05:42:52 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:44.597 05:42:52 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:44.597 05:42:52 -- common/autotest_common.sh@10 -- # set +x 00:04:44.597 05:42:52 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:44.597 05:42:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:44.597 05:42:52 -- common/autotest_common.sh@10 -- 
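The two `get_nvme_ctrlr_from_bdf` lookups above resolve every `/sys/class/nvme/nvme*` symlink and keep the one whose physical path contains the target BDF; note the kernel's enumeration order need not match BDF order (here `0000:00:10.0` resolved to `nvme1` and `0000:00:11.0` to `nvme0`). A sketch with the sysfs root parameterized for testing (an assumption; the original hardcodes `/sys/class/nvme`):

```shell
# Map a PCI BDF like 0000:00:10.0 to its NVMe controller name (nvme0, ...).
# readlink -f turns the class symlink into the physical device path, which
# embeds the BDF; the first match wins.
get_nvme_ctrlr_from_bdf() {
    local bdf=$1 sysfs=${2:-/sys/class/nvme} link path
    for link in "$sysfs"/nvme*; do
        path=$(readlink -f "$link")
        if [[ $path == *"$bdf/nvme/"* ]]; then
            basename "$path"
            return 0
        fi
    done
    return 1
}
```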
# set +x 00:04:44.597 05:42:52 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:45.536 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.536 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.796 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.796 05:42:53 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:45.796 05:42:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:45.796 05:42:53 -- common/autotest_common.sh@10 -- # set +x 00:04:45.796 05:42:53 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:45.796 05:42:53 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:45.796 05:42:53 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:45.796 05:42:53 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:45.796 05:42:53 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:45.796 05:42:53 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:45.796 05:42:53 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:45.796 05:42:53 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:45.796 05:42:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:45.796 05:42:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:45.796 05:42:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:45.796 05:42:53 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:45.796 05:42:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:45.796 05:42:53 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:04:45.796 05:42:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:45.796 05:42:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:45.796 05:42:53 -- common/autotest_common.sh@1566 -- # cat 
/sys/bus/pci/devices/0000:00:10.0/device 00:04:45.796 05:42:53 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:45.796 05:42:53 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:45.796 05:42:53 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:45.796 05:42:53 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:45.796 05:42:53 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:45.796 05:42:53 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:45.796 05:42:53 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:45.796 05:42:53 -- common/autotest_common.sh@1572 -- # return 0 00:04:45.796 05:42:53 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:45.796 05:42:53 -- common/autotest_common.sh@1580 -- # return 0 00:04:45.796 05:42:53 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:45.796 05:42:53 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:45.796 05:42:53 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:45.796 05:42:53 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:45.796 05:42:53 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:45.796 05:42:53 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.796 05:42:53 -- common/autotest_common.sh@10 -- # set +x 00:04:45.796 05:42:53 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:45.796 05:42:53 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:45.796 05:42:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.796 05:42:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.796 05:42:53 -- common/autotest_common.sh@10 -- # set +x 00:04:46.056 ************************************ 00:04:46.056 START TEST env 00:04:46.056 ************************************ 00:04:46.056 05:42:53 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:46.056 * Looking for test storage... 
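The `get_nvme_bdfs_by_id 0x0a54` pass above reads each controller's PCI device ID from sysfs and keeps only BDFs matching the target; the emulated controllers here report `0x0010`, so nothing matches and `opal_revert_cleanup` returns early. A sketch of that filter, with the sysfs root passed in explicitly for testability (an assumption; the original reads `/sys/bus/pci/devices` directly):

```shell
# Print only the BDFs whose PCI device ID equals the target
# (e.g. 0x0a54 as in the trace above).
get_nvme_bdfs_by_id() {
    local target=$1 sysfs=$2 bdf dev
    shift 2
    for bdf in "$@"; do
        dev=$(< "$sysfs/$bdf/device")  # e.g. "0x0010"
        [[ $dev == "$target" ]] && echo "$bdf"
    done
    return 0
}
```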
00:04:46.056 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:46.056 05:42:53 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:46.056 05:42:53 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:46.056 05:42:53 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:46.056 05:42:53 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:46.056 05:42:53 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:46.056 05:42:53 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:46.056 05:42:53 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:46.056 05:42:53 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:46.056 05:42:53 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:46.056 05:42:53 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:46.056 05:42:53 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:46.056 05:42:53 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:46.056 05:42:53 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:46.056 05:42:53 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:46.056 05:42:53 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:46.056 05:42:53 env -- scripts/common.sh@344 -- # case "$op" in 00:04:46.056 05:42:53 env -- scripts/common.sh@345 -- # : 1 00:04:46.056 05:42:53 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:46.056 05:42:53 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:46.056 05:42:53 env -- scripts/common.sh@365 -- # decimal 1 00:04:46.056 05:42:53 env -- scripts/common.sh@353 -- # local d=1 00:04:46.056 05:42:53 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:46.056 05:42:53 env -- scripts/common.sh@355 -- # echo 1 00:04:46.056 05:42:53 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:46.056 05:42:53 env -- scripts/common.sh@366 -- # decimal 2 00:04:46.056 05:42:53 env -- scripts/common.sh@353 -- # local d=2 00:04:46.056 05:42:53 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:46.056 05:42:53 env -- scripts/common.sh@355 -- # echo 2 00:04:46.056 05:42:53 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:46.056 05:42:53 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:46.056 05:42:53 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:46.056 05:42:53 env -- scripts/common.sh@368 -- # return 0 00:04:46.056 05:42:53 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:46.056 05:42:53 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:46.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.056 --rc genhtml_branch_coverage=1 00:04:46.056 --rc genhtml_function_coverage=1 00:04:46.056 --rc genhtml_legend=1 00:04:46.056 --rc geninfo_all_blocks=1 00:04:46.056 --rc geninfo_unexecuted_blocks=1 00:04:46.056 00:04:46.056 ' 00:04:46.056 05:42:53 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:46.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.057 --rc genhtml_branch_coverage=1 00:04:46.057 --rc genhtml_function_coverage=1 00:04:46.057 --rc genhtml_legend=1 00:04:46.057 --rc geninfo_all_blocks=1 00:04:46.057 --rc geninfo_unexecuted_blocks=1 00:04:46.057 00:04:46.057 ' 00:04:46.057 05:42:53 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:46.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:46.057 --rc genhtml_branch_coverage=1 00:04:46.057 --rc genhtml_function_coverage=1 00:04:46.057 --rc genhtml_legend=1 00:04:46.057 --rc geninfo_all_blocks=1 00:04:46.057 --rc geninfo_unexecuted_blocks=1 00:04:46.057 00:04:46.057 ' 00:04:46.057 05:42:53 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:46.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:46.057 --rc genhtml_branch_coverage=1 00:04:46.057 --rc genhtml_function_coverage=1 00:04:46.057 --rc genhtml_legend=1 00:04:46.057 --rc geninfo_all_blocks=1 00:04:46.057 --rc geninfo_unexecuted_blocks=1 00:04:46.057 00:04:46.057 ' 00:04:46.057 05:42:53 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:46.057 05:42:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.057 05:42:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.057 05:42:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.057 ************************************ 00:04:46.057 START TEST env_memory 00:04:46.057 ************************************ 00:04:46.057 05:42:53 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:46.324 00:04:46.324 00:04:46.324 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.324 http://cunit.sourceforge.net/ 00:04:46.324 00:04:46.324 00:04:46.324 Suite: memory 00:04:46.324 Test: alloc and free memory map ...[2024-12-12 05:42:53.628135] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:46.324 passed 00:04:46.324 Test: mem map translation ...[2024-12-12 05:42:53.669988] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:46.324 [2024-12-12 05:42:53.670026] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
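The `lt 1.15 2` trace above (the lcov version gate) compares dotted versions field by field after splitting on `IFS=.-:`. A compact reimplementation of the less-than case for illustration (not the exact `scripts/common.sh` code, which handles `<`, `>`, and `=` through one `cmp_versions` helper):

```shell
# Return 0 when dotted version $1 is strictly less than $2, comparing
# numeric fields left to right and padding missing fields with 0.
version_lt() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # equal is not less-than
}
```

With lcov 1.15 against the threshold 2, the first field already decides it (1 < 2), which is why the run above enables the `--rc lcov_*_coverage=1` options.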
595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:46.324 [2024-12-12 05:42:53.670079] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:46.324 [2024-12-12 05:42:53.670097] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:46.324 passed 00:04:46.324 Test: mem map registration ...[2024-12-12 05:42:53.732951] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:46.324 [2024-12-12 05:42:53.732989] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:46.324 passed 00:04:46.324 Test: mem map adjacent registrations ...passed 00:04:46.324 00:04:46.324 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.324 suites 1 1 n/a 0 0 00:04:46.324 tests 4 4 4 0 0 00:04:46.324 asserts 152 152 152 0 n/a 00:04:46.324 00:04:46.324 Elapsed time = 0.229 seconds 00:04:46.597 00:04:46.597 real 0m0.281s 00:04:46.597 user 0m0.243s 00:04:46.597 sys 0m0.027s 00:04:46.597 05:42:53 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:46.597 05:42:53 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:46.597 ************************************ 00:04:46.597 END TEST env_memory 00:04:46.597 ************************************ 00:04:46.597 05:42:53 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:46.597 05:42:53 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:46.597 05:42:53 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:46.597 05:42:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:46.597 
************************************ 00:04:46.597 START TEST env_vtophys 00:04:46.597 ************************************ 00:04:46.597 05:42:53 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:46.597 EAL: lib.eal log level changed from notice to debug 00:04:46.597 EAL: Detected lcore 0 as core 0 on socket 0 00:04:46.597 EAL: Detected lcore 1 as core 0 on socket 0 00:04:46.597 EAL: Detected lcore 2 as core 0 on socket 0 00:04:46.597 EAL: Detected lcore 3 as core 0 on socket 0 00:04:46.597 EAL: Detected lcore 4 as core 0 on socket 0 00:04:46.597 EAL: Detected lcore 5 as core 0 on socket 0 00:04:46.597 EAL: Detected lcore 6 as core 0 on socket 0 00:04:46.597 EAL: Detected lcore 7 as core 0 on socket 0 00:04:46.597 EAL: Detected lcore 8 as core 0 on socket 0 00:04:46.597 EAL: Detected lcore 9 as core 0 on socket 0 00:04:46.597 EAL: Maximum logical cores by configuration: 128 00:04:46.597 EAL: Detected CPU lcores: 10 00:04:46.597 EAL: Detected NUMA nodes: 1 00:04:46.597 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:46.597 EAL: Detected shared linkage of DPDK 00:04:46.597 EAL: No shared files mode enabled, IPC will be disabled 00:04:46.597 EAL: Selected IOVA mode 'PA' 00:04:46.597 EAL: Probing VFIO support... 00:04:46.597 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:46.597 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:46.597 EAL: Ask a virtual area of 0x2e000 bytes 00:04:46.597 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:46.597 EAL: Setting up physically contiguous memory... 
00:04:46.597 EAL: Setting maximum number of open files to 524288 00:04:46.597 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:46.597 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:46.598 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.598 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:46.598 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.598 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.598 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:46.598 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:46.598 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.598 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:46.598 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.598 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.598 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:46.598 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:46.598 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.598 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:46.598 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.598 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.598 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:46.598 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:46.598 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.598 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:46.598 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.598 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.598 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:46.598 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:46.598 EAL: Hugepages will be freed exactly as allocated. 
00:04:46.598 EAL: No shared files mode enabled, IPC is disabled 00:04:46.598 EAL: No shared files mode enabled, IPC is disabled 00:04:46.598 EAL: TSC frequency is ~2290000 KHz 00:04:46.598 EAL: Main lcore 0 is ready (tid=7ff23e734a40;cpuset=[0]) 00:04:46.598 EAL: Trying to obtain current memory policy. 00:04:46.598 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.598 EAL: Restoring previous memory policy: 0 00:04:46.598 EAL: request: mp_malloc_sync 00:04:46.598 EAL: No shared files mode enabled, IPC is disabled 00:04:46.598 EAL: Heap on socket 0 was expanded by 2MB 00:04:46.598 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:46.598 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:46.598 EAL: Mem event callback 'spdk:(nil)' registered 00:04:46.598 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:46.858 00:04:46.858 00:04:46.858 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.858 http://cunit.sourceforge.net/ 00:04:46.858 00:04:46.858 00:04:46.858 Suite: components_suite 00:04:47.117 Test: vtophys_malloc_test ...passed 00:04:47.117 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:47.117 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.117 EAL: Restoring previous memory policy: 4 00:04:47.117 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.117 EAL: request: mp_malloc_sync 00:04:47.117 EAL: No shared files mode enabled, IPC is disabled 00:04:47.117 EAL: Heap on socket 0 was expanded by 4MB 00:04:47.117 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.117 EAL: request: mp_malloc_sync 00:04:47.117 EAL: No shared files mode enabled, IPC is disabled 00:04:47.117 EAL: Heap on socket 0 was shrunk by 4MB 00:04:47.117 EAL: Trying to obtain current memory policy. 
00:04:47.117 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.117 EAL: Restoring previous memory policy: 4 00:04:47.117 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.117 EAL: request: mp_malloc_sync 00:04:47.117 EAL: No shared files mode enabled, IPC is disabled 00:04:47.117 EAL: Heap on socket 0 was expanded by 6MB 00:04:47.117 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.117 EAL: request: mp_malloc_sync 00:04:47.117 EAL: No shared files mode enabled, IPC is disabled 00:04:47.117 EAL: Heap on socket 0 was shrunk by 6MB 00:04:47.117 EAL: Trying to obtain current memory policy. 00:04:47.117 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.117 EAL: Restoring previous memory policy: 4 00:04:47.117 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.117 EAL: request: mp_malloc_sync 00:04:47.117 EAL: No shared files mode enabled, IPC is disabled 00:04:47.117 EAL: Heap on socket 0 was expanded by 10MB 00:04:47.117 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.117 EAL: request: mp_malloc_sync 00:04:47.117 EAL: No shared files mode enabled, IPC is disabled 00:04:47.117 EAL: Heap on socket 0 was shrunk by 10MB 00:04:47.117 EAL: Trying to obtain current memory policy. 00:04:47.117 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.117 EAL: Restoring previous memory policy: 4 00:04:47.117 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.117 EAL: request: mp_malloc_sync 00:04:47.117 EAL: No shared files mode enabled, IPC is disabled 00:04:47.117 EAL: Heap on socket 0 was expanded by 18MB 00:04:47.117 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.117 EAL: request: mp_malloc_sync 00:04:47.117 EAL: No shared files mode enabled, IPC is disabled 00:04:47.117 EAL: Heap on socket 0 was shrunk by 18MB 00:04:47.117 EAL: Trying to obtain current memory policy. 
00:04:47.117 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.117 EAL: Restoring previous memory policy: 4 00:04:47.117 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.117 EAL: request: mp_malloc_sync 00:04:47.117 EAL: No shared files mode enabled, IPC is disabled 00:04:47.117 EAL: Heap on socket 0 was expanded by 34MB 00:04:47.117 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.117 EAL: request: mp_malloc_sync 00:04:47.117 EAL: No shared files mode enabled, IPC is disabled 00:04:47.117 EAL: Heap on socket 0 was shrunk by 34MB 00:04:47.377 EAL: Trying to obtain current memory policy. 00:04:47.377 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.377 EAL: Restoring previous memory policy: 4 00:04:47.377 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.377 EAL: request: mp_malloc_sync 00:04:47.377 EAL: No shared files mode enabled, IPC is disabled 00:04:47.377 EAL: Heap on socket 0 was expanded by 66MB 00:04:47.377 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.377 EAL: request: mp_malloc_sync 00:04:47.377 EAL: No shared files mode enabled, IPC is disabled 00:04:47.377 EAL: Heap on socket 0 was shrunk by 66MB 00:04:47.636 EAL: Trying to obtain current memory policy. 00:04:47.636 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.636 EAL: Restoring previous memory policy: 4 00:04:47.636 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.636 EAL: request: mp_malloc_sync 00:04:47.636 EAL: No shared files mode enabled, IPC is disabled 00:04:47.636 EAL: Heap on socket 0 was expanded by 130MB 00:04:47.636 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.895 EAL: request: mp_malloc_sync 00:04:47.895 EAL: No shared files mode enabled, IPC is disabled 00:04:47.895 EAL: Heap on socket 0 was shrunk by 130MB 00:04:47.895 EAL: Trying to obtain current memory policy. 
00:04:47.895 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.895 EAL: Restoring previous memory policy: 4 00:04:47.895 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.895 EAL: request: mp_malloc_sync 00:04:47.895 EAL: No shared files mode enabled, IPC is disabled 00:04:47.895 EAL: Heap on socket 0 was expanded by 258MB 00:04:48.463 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.463 EAL: request: mp_malloc_sync 00:04:48.463 EAL: No shared files mode enabled, IPC is disabled 00:04:48.463 EAL: Heap on socket 0 was shrunk by 258MB 00:04:49.032 EAL: Trying to obtain current memory policy. 00:04:49.032 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.032 EAL: Restoring previous memory policy: 4 00:04:49.032 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.032 EAL: request: mp_malloc_sync 00:04:49.032 EAL: No shared files mode enabled, IPC is disabled 00:04:49.032 EAL: Heap on socket 0 was expanded by 514MB 00:04:49.969 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.969 EAL: request: mp_malloc_sync 00:04:49.969 EAL: No shared files mode enabled, IPC is disabled 00:04:49.969 EAL: Heap on socket 0 was shrunk by 514MB 00:04:50.907 EAL: Trying to obtain current memory policy. 
00:04:50.907 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.907 EAL: Restoring previous memory policy: 4 00:04:50.907 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.907 EAL: request: mp_malloc_sync 00:04:50.907 EAL: No shared files mode enabled, IPC is disabled 00:04:50.907 EAL: Heap on socket 0 was expanded by 1026MB 00:04:52.812 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.812 EAL: request: mp_malloc_sync 00:04:52.812 EAL: No shared files mode enabled, IPC is disabled 00:04:52.812 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:54.189 passed 00:04:54.189 00:04:54.189 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.189 suites 1 1 n/a 0 0 00:04:54.189 tests 2 2 2 0 0 00:04:54.189 asserts 5775 5775 5775 0 n/a 00:04:54.189 00:04:54.189 Elapsed time = 7.504 seconds 00:04:54.189 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.189 EAL: request: mp_malloc_sync 00:04:54.189 EAL: No shared files mode enabled, IPC is disabled 00:04:54.189 EAL: Heap on socket 0 was shrunk by 2MB 00:04:54.189 EAL: No shared files mode enabled, IPC is disabled 00:04:54.189 EAL: No shared files mode enabled, IPC is disabled 00:04:54.189 EAL: No shared files mode enabled, IPC is disabled 00:04:54.448 00:04:54.448 real 0m7.821s 00:04:54.448 user 0m6.894s 00:04:54.448 sys 0m0.775s 00:04:54.448 05:43:01 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.448 05:43:01 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:54.448 ************************************ 00:04:54.448 END TEST env_vtophys 00:04:54.448 ************************************ 00:04:54.448 05:43:01 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:54.448 05:43:01 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.448 05:43:01 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.448 05:43:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.448 
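The heap sizes in the `vtophys_spdk_malloc_test` pass above (4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB) follow a `2^n + 2` MB progression, which each EAL expand/shrink pair walks through. A one-liner reproducing the observed sequence (an inference from the log, not code from the test itself):

```shell
# Emit the malloc test sizes in MB: (1 << n) + 2 for n = 1..10,
# i.e. 4 6 10 18 34 66 130 258 514 1026.
malloc_sizes_mb() {
    local n
    for (( n = 1; n <= 10; n++ )); do
        echo $(( (1 << n) + 2 ))
    done
}
```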
************************************ 00:04:54.448 START TEST env_pci 00:04:54.448 ************************************ 00:04:54.448 05:43:01 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:54.448 00:04:54.448 00:04:54.448 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.448 http://cunit.sourceforge.net/ 00:04:54.448 00:04:54.448 00:04:54.448 Suite: pci 00:04:54.448 Test: pci_hook ...[2024-12-12 05:43:01.832244] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57860 has claimed it 00:04:54.448 passed 00:04:54.448 00:04:54.448 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.448 suites 1 1 n/a 0 0 00:04:54.448 tests 1 1 1 0 0 00:04:54.448 asserts 25 25 25 0 n/a 00:04:54.448 00:04:54.448 Elapsed time = 0.005 seconds 00:04:54.448 EAL: Cannot find device (10000:00:01.0) 00:04:54.448 EAL: Failed to attach device on primary process 00:04:54.448 00:04:54.448 real 0m0.108s 00:04:54.448 user 0m0.040s 00:04:54.448 sys 0m0.066s 00:04:54.448 05:43:01 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.448 05:43:01 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:54.448 ************************************ 00:04:54.448 END TEST env_pci 00:04:54.448 ************************************ 00:04:54.448 05:43:01 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:54.448 05:43:01 env -- env/env.sh@15 -- # uname 00:04:54.448 05:43:01 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:54.448 05:43:01 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:54.448 05:43:01 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:54.448 05:43:01 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:54.448 05:43:01 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.448 05:43:01 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.707 ************************************ 00:04:54.707 START TEST env_dpdk_post_init 00:04:54.707 ************************************ 00:04:54.707 05:43:01 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:54.707 EAL: Detected CPU lcores: 10 00:04:54.707 EAL: Detected NUMA nodes: 1 00:04:54.707 EAL: Detected shared linkage of DPDK 00:04:54.707 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:54.707 EAL: Selected IOVA mode 'PA' 00:04:54.707 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:54.707 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:54.707 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:55.002 Starting DPDK initialization... 00:04:55.002 Starting SPDK post initialization... 00:04:55.002 SPDK NVMe probe 00:04:55.002 Attaching to 0000:00:10.0 00:04:55.002 Attaching to 0000:00:11.0 00:04:55.002 Attached to 0000:00:10.0 00:04:55.002 Attached to 0000:00:11.0 00:04:55.002 Cleaning up... 
00:04:55.002 00:04:55.002 real 0m0.278s 00:04:55.002 user 0m0.088s 00:04:55.002 sys 0m0.090s 00:04:55.002 05:43:02 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.002 05:43:02 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.002 ************************************ 00:04:55.002 END TEST env_dpdk_post_init 00:04:55.002 ************************************ 00:04:55.002 05:43:02 env -- env/env.sh@26 -- # uname 00:04:55.002 05:43:02 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:55.002 05:43:02 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:55.002 05:43:02 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.002 05:43:02 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.002 05:43:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.002 ************************************ 00:04:55.002 START TEST env_mem_callbacks 00:04:55.002 ************************************ 00:04:55.002 05:43:02 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:55.002 EAL: Detected CPU lcores: 10 00:04:55.002 EAL: Detected NUMA nodes: 1 00:04:55.002 EAL: Detected shared linkage of DPDK 00:04:55.002 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:55.002 EAL: Selected IOVA mode 'PA' 00:04:55.002 00:04:55.002 00:04:55.002 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.002 http://cunit.sourceforge.net/ 00:04:55.002 00:04:55.002 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:55.002 00:04:55.002 Suite: memory 00:04:55.002 Test: test ... 
00:04:55.002 register 0x200000200000 2097152 00:04:55.002 malloc 3145728 00:04:55.002 register 0x200000400000 4194304 00:04:55.002 buf 0x2000004fffc0 len 3145728 PASSED 00:04:55.002 malloc 64 00:04:55.002 buf 0x2000004ffec0 len 64 PASSED 00:04:55.002 malloc 4194304 00:04:55.002 register 0x200000800000 6291456 00:04:55.278 buf 0x2000009fffc0 len 4194304 PASSED 00:04:55.278 free 0x2000004fffc0 3145728 00:04:55.278 free 0x2000004ffec0 64 00:04:55.278 unregister 0x200000400000 4194304 PASSED 00:04:55.278 free 0x2000009fffc0 4194304 00:04:55.278 unregister 0x200000800000 6291456 PASSED 00:04:55.278 malloc 8388608 00:04:55.278 register 0x200000400000 10485760 00:04:55.278 buf 0x2000005fffc0 len 8388608 PASSED 00:04:55.278 free 0x2000005fffc0 8388608 00:04:55.278 unregister 0x200000400000 10485760 PASSED 00:04:55.278 passed 00:04:55.278 00:04:55.278 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.278 suites 1 1 n/a 0 0 00:04:55.278 tests 1 1 1 0 0 00:04:55.278 asserts 15 15 15 0 n/a 00:04:55.278 00:04:55.278 Elapsed time = 0.079 seconds 00:04:55.278 00:04:55.278 real 0m0.271s 00:04:55.278 user 0m0.104s 00:04:55.278 sys 0m0.065s 00:04:55.278 05:43:02 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.278 05:43:02 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:55.278 ************************************ 00:04:55.278 END TEST env_mem_callbacks 00:04:55.278 ************************************ 00:04:55.278 00:04:55.278 real 0m9.321s 00:04:55.278 user 0m7.588s 00:04:55.278 sys 0m1.383s 00:04:55.278 05:43:02 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.278 05:43:02 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.278 ************************************ 00:04:55.278 END TEST env 00:04:55.278 ************************************ 00:04:55.278 05:43:02 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:55.278 05:43:02 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.278 05:43:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.278 05:43:02 -- common/autotest_common.sh@10 -- # set +x 00:04:55.278 ************************************ 00:04:55.278 START TEST rpc 00:04:55.278 ************************************ 00:04:55.278 05:43:02 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:55.538 * Looking for test storage... 00:04:55.538 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:55.538 05:43:02 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:55.538 05:43:02 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:55.538 05:43:02 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:55.538 05:43:02 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:55.538 05:43:02 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.538 05:43:02 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.538 05:43:02 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.538 05:43:02 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.538 05:43:02 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.538 05:43:02 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.538 05:43:02 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.538 05:43:02 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.538 05:43:02 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.538 05:43:02 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.538 05:43:02 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.538 05:43:02 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:55.538 05:43:02 rpc -- scripts/common.sh@345 -- # : 1 00:04:55.538 05:43:02 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.538 05:43:02 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:55.538 05:43:02 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:55.538 05:43:02 rpc -- scripts/common.sh@353 -- # local d=1 00:04:55.538 05:43:02 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.538 05:43:02 rpc -- scripts/common.sh@355 -- # echo 1 00:04:55.538 05:43:02 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.538 05:43:02 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:55.538 05:43:02 rpc -- scripts/common.sh@353 -- # local d=2 00:04:55.538 05:43:02 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.538 05:43:02 rpc -- scripts/common.sh@355 -- # echo 2 00:04:55.538 05:43:02 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.538 05:43:02 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.538 05:43:02 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.538 05:43:02 rpc -- scripts/common.sh@368 -- # return 0 00:04:55.538 05:43:02 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.538 05:43:02 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:55.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.538 --rc genhtml_branch_coverage=1 00:04:55.538 --rc genhtml_function_coverage=1 00:04:55.538 --rc genhtml_legend=1 00:04:55.538 --rc geninfo_all_blocks=1 00:04:55.538 --rc geninfo_unexecuted_blocks=1 00:04:55.538 00:04:55.538 ' 00:04:55.538 05:43:02 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:55.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.538 --rc genhtml_branch_coverage=1 00:04:55.538 --rc genhtml_function_coverage=1 00:04:55.538 --rc genhtml_legend=1 00:04:55.538 --rc geninfo_all_blocks=1 00:04:55.538 --rc geninfo_unexecuted_blocks=1 00:04:55.538 00:04:55.538 ' 00:04:55.538 05:43:02 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:55.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:55.538 --rc genhtml_branch_coverage=1 00:04:55.538 --rc genhtml_function_coverage=1 00:04:55.538 --rc genhtml_legend=1 00:04:55.538 --rc geninfo_all_blocks=1 00:04:55.538 --rc geninfo_unexecuted_blocks=1 00:04:55.538 00:04:55.538 ' 00:04:55.538 05:43:02 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:55.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.538 --rc genhtml_branch_coverage=1 00:04:55.538 --rc genhtml_function_coverage=1 00:04:55.538 --rc genhtml_legend=1 00:04:55.538 --rc geninfo_all_blocks=1 00:04:55.538 --rc geninfo_unexecuted_blocks=1 00:04:55.538 00:04:55.538 ' 00:04:55.538 05:43:02 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57987 00:04:55.538 05:43:02 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:55.538 05:43:02 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.538 05:43:02 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57987 00:04:55.538 05:43:02 rpc -- common/autotest_common.sh@835 -- # '[' -z 57987 ']' 00:04:55.538 05:43:02 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.538 05:43:02 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.538 05:43:02 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.538 05:43:02 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.538 05:43:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.538 [2024-12-12 05:43:03.038234] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:04:55.538 [2024-12-12 05:43:03.038353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57987 ] 00:04:55.798 [2024-12-12 05:43:03.209993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.798 [2024-12-12 05:43:03.316649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:55.798 [2024-12-12 05:43:03.316711] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57987' to capture a snapshot of events at runtime. 00:04:55.798 [2024-12-12 05:43:03.316720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:55.798 [2024-12-12 05:43:03.316745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:55.798 [2024-12-12 05:43:03.316752] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57987 for offline analysis/debug. 
00:04:55.798 [2024-12-12 05:43:03.317900] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.737 05:43:04 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.737 05:43:04 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:56.737 05:43:04 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:56.737 05:43:04 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:56.737 05:43:04 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:56.737 05:43:04 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:56.737 05:43:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.737 05:43:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.737 05:43:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.737 ************************************ 00:04:56.737 START TEST rpc_integrity 00:04:56.737 ************************************ 00:04:56.737 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:56.737 05:43:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:56.737 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.737 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.737 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.737 05:43:04 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:56.737 05:43:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:56.737 05:43:04 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:56.737 05:43:04 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:56.737 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.737 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.998 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.998 05:43:04 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:56.998 05:43:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:56.998 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.998 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.998 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.998 05:43:04 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:56.998 { 00:04:56.998 "name": "Malloc0", 00:04:56.998 "aliases": [ 00:04:56.998 "d0466a15-65c9-47cd-b026-a2a680f615dc" 00:04:56.998 ], 00:04:56.998 "product_name": "Malloc disk", 00:04:56.998 "block_size": 512, 00:04:56.998 "num_blocks": 16384, 00:04:56.998 "uuid": "d0466a15-65c9-47cd-b026-a2a680f615dc", 00:04:56.998 "assigned_rate_limits": { 00:04:56.998 "rw_ios_per_sec": 0, 00:04:56.998 "rw_mbytes_per_sec": 0, 00:04:56.998 "r_mbytes_per_sec": 0, 00:04:56.998 "w_mbytes_per_sec": 0 00:04:56.998 }, 00:04:56.998 "claimed": false, 00:04:56.998 "zoned": false, 00:04:56.998 "supported_io_types": { 00:04:56.998 "read": true, 00:04:56.998 "write": true, 00:04:56.998 "unmap": true, 00:04:56.998 "flush": true, 00:04:56.998 "reset": true, 00:04:56.998 "nvme_admin": false, 00:04:56.998 "nvme_io": false, 00:04:56.998 "nvme_io_md": false, 00:04:56.998 "write_zeroes": true, 00:04:56.998 "zcopy": true, 00:04:56.998 "get_zone_info": false, 00:04:56.998 "zone_management": false, 00:04:56.998 "zone_append": false, 00:04:56.998 "compare": false, 00:04:56.998 "compare_and_write": false, 00:04:56.998 "abort": true, 00:04:56.998 "seek_hole": false, 
00:04:56.998 "seek_data": false, 00:04:56.998 "copy": true, 00:04:56.998 "nvme_iov_md": false 00:04:56.998 }, 00:04:56.998 "memory_domains": [ 00:04:56.998 { 00:04:56.998 "dma_device_id": "system", 00:04:56.998 "dma_device_type": 1 00:04:56.998 }, 00:04:56.998 { 00:04:56.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.998 "dma_device_type": 2 00:04:56.998 } 00:04:56.998 ], 00:04:56.998 "driver_specific": {} 00:04:56.998 } 00:04:56.998 ]' 00:04:56.998 05:43:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:56.998 05:43:04 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:56.998 05:43:04 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:56.998 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.998 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.998 [2024-12-12 05:43:04.340042] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:56.998 [2024-12-12 05:43:04.340107] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:56.998 [2024-12-12 05:43:04.340147] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:56.998 [2024-12-12 05:43:04.340168] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:56.998 [2024-12-12 05:43:04.342356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:56.998 [2024-12-12 05:43:04.342400] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:56.998 Passthru0 00:04:56.998 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.998 05:43:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:56.998 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.998 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:56.998 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.998 05:43:04 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:56.998 { 00:04:56.998 "name": "Malloc0", 00:04:56.998 "aliases": [ 00:04:56.998 "d0466a15-65c9-47cd-b026-a2a680f615dc" 00:04:56.998 ], 00:04:56.998 "product_name": "Malloc disk", 00:04:56.998 "block_size": 512, 00:04:56.998 "num_blocks": 16384, 00:04:56.998 "uuid": "d0466a15-65c9-47cd-b026-a2a680f615dc", 00:04:56.998 "assigned_rate_limits": { 00:04:56.998 "rw_ios_per_sec": 0, 00:04:56.998 "rw_mbytes_per_sec": 0, 00:04:56.998 "r_mbytes_per_sec": 0, 00:04:56.998 "w_mbytes_per_sec": 0 00:04:56.998 }, 00:04:56.998 "claimed": true, 00:04:56.998 "claim_type": "exclusive_write", 00:04:56.998 "zoned": false, 00:04:56.998 "supported_io_types": { 00:04:56.998 "read": true, 00:04:56.998 "write": true, 00:04:56.998 "unmap": true, 00:04:56.998 "flush": true, 00:04:56.998 "reset": true, 00:04:56.998 "nvme_admin": false, 00:04:56.998 "nvme_io": false, 00:04:56.998 "nvme_io_md": false, 00:04:56.998 "write_zeroes": true, 00:04:56.998 "zcopy": true, 00:04:56.998 "get_zone_info": false, 00:04:56.998 "zone_management": false, 00:04:56.998 "zone_append": false, 00:04:56.998 "compare": false, 00:04:56.998 "compare_and_write": false, 00:04:56.998 "abort": true, 00:04:56.998 "seek_hole": false, 00:04:56.998 "seek_data": false, 00:04:56.998 "copy": true, 00:04:56.998 "nvme_iov_md": false 00:04:56.998 }, 00:04:56.998 "memory_domains": [ 00:04:56.998 { 00:04:56.998 "dma_device_id": "system", 00:04:56.998 "dma_device_type": 1 00:04:56.998 }, 00:04:56.998 { 00:04:56.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.998 "dma_device_type": 2 00:04:56.998 } 00:04:56.998 ], 00:04:56.998 "driver_specific": {} 00:04:56.998 }, 00:04:56.998 { 00:04:56.998 "name": "Passthru0", 00:04:56.998 "aliases": [ 00:04:56.998 "4d138e32-7159-554b-8d86-f98adbcdd990" 00:04:56.998 ], 00:04:56.998 "product_name": "passthru", 00:04:56.998 
"block_size": 512, 00:04:56.998 "num_blocks": 16384, 00:04:56.998 "uuid": "4d138e32-7159-554b-8d86-f98adbcdd990", 00:04:56.998 "assigned_rate_limits": { 00:04:56.998 "rw_ios_per_sec": 0, 00:04:56.998 "rw_mbytes_per_sec": 0, 00:04:56.998 "r_mbytes_per_sec": 0, 00:04:56.998 "w_mbytes_per_sec": 0 00:04:56.998 }, 00:04:56.999 "claimed": false, 00:04:56.999 "zoned": false, 00:04:56.999 "supported_io_types": { 00:04:56.999 "read": true, 00:04:56.999 "write": true, 00:04:56.999 "unmap": true, 00:04:56.999 "flush": true, 00:04:56.999 "reset": true, 00:04:56.999 "nvme_admin": false, 00:04:56.999 "nvme_io": false, 00:04:56.999 "nvme_io_md": false, 00:04:56.999 "write_zeroes": true, 00:04:56.999 "zcopy": true, 00:04:56.999 "get_zone_info": false, 00:04:56.999 "zone_management": false, 00:04:56.999 "zone_append": false, 00:04:56.999 "compare": false, 00:04:56.999 "compare_and_write": false, 00:04:56.999 "abort": true, 00:04:56.999 "seek_hole": false, 00:04:56.999 "seek_data": false, 00:04:56.999 "copy": true, 00:04:56.999 "nvme_iov_md": false 00:04:56.999 }, 00:04:56.999 "memory_domains": [ 00:04:56.999 { 00:04:56.999 "dma_device_id": "system", 00:04:56.999 "dma_device_type": 1 00:04:56.999 }, 00:04:56.999 { 00:04:56.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.999 "dma_device_type": 2 00:04:56.999 } 00:04:56.999 ], 00:04:56.999 "driver_specific": { 00:04:56.999 "passthru": { 00:04:56.999 "name": "Passthru0", 00:04:56.999 "base_bdev_name": "Malloc0" 00:04:56.999 } 00:04:56.999 } 00:04:56.999 } 00:04:56.999 ]' 00:04:56.999 05:43:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:56.999 05:43:04 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:56.999 05:43:04 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:56.999 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.999 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.999 05:43:04 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.999 05:43:04 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:56.999 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.999 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.999 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.999 05:43:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:56.999 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.999 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.999 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.999 05:43:04 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:56.999 05:43:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:56.999 05:43:04 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:56.999 00:04:56.999 real 0m0.342s 00:04:56.999 user 0m0.172s 00:04:56.999 sys 0m0.064s 00:04:56.999 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.999 05:43:04 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.999 ************************************ 00:04:56.999 END TEST rpc_integrity 00:04:56.999 ************************************ 00:04:57.259 05:43:04 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:57.259 05:43:04 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.259 05:43:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.259 05:43:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.259 ************************************ 00:04:57.259 START TEST rpc_plugins 00:04:57.259 ************************************ 00:04:57.259 05:43:04 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:57.259 05:43:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:57.259 05:43:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.259 05:43:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.259 05:43:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.259 05:43:04 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:57.259 05:43:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:57.259 05:43:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.259 05:43:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.259 05:43:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.259 05:43:04 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:57.259 { 00:04:57.259 "name": "Malloc1", 00:04:57.259 "aliases": [ 00:04:57.259 "2a977b05-e1df-4328-9ed7-b11a95f828b1" 00:04:57.259 ], 00:04:57.259 "product_name": "Malloc disk", 00:04:57.259 "block_size": 4096, 00:04:57.259 "num_blocks": 256, 00:04:57.259 "uuid": "2a977b05-e1df-4328-9ed7-b11a95f828b1", 00:04:57.259 "assigned_rate_limits": { 00:04:57.259 "rw_ios_per_sec": 0, 00:04:57.259 "rw_mbytes_per_sec": 0, 00:04:57.259 "r_mbytes_per_sec": 0, 00:04:57.259 "w_mbytes_per_sec": 0 00:04:57.259 }, 00:04:57.259 "claimed": false, 00:04:57.259 "zoned": false, 00:04:57.259 "supported_io_types": { 00:04:57.259 "read": true, 00:04:57.259 "write": true, 00:04:57.259 "unmap": true, 00:04:57.259 "flush": true, 00:04:57.259 "reset": true, 00:04:57.259 "nvme_admin": false, 00:04:57.259 "nvme_io": false, 00:04:57.259 "nvme_io_md": false, 00:04:57.259 "write_zeroes": true, 00:04:57.259 "zcopy": true, 00:04:57.259 "get_zone_info": false, 00:04:57.259 "zone_management": false, 00:04:57.259 "zone_append": false, 00:04:57.259 "compare": false, 00:04:57.259 "compare_and_write": false, 00:04:57.259 "abort": true, 00:04:57.259 "seek_hole": false, 00:04:57.259 "seek_data": false, 00:04:57.259 "copy": 
true, 00:04:57.259 "nvme_iov_md": false 00:04:57.259 }, 00:04:57.259 "memory_domains": [ 00:04:57.259 { 00:04:57.259 "dma_device_id": "system", 00:04:57.259 "dma_device_type": 1 00:04:57.259 }, 00:04:57.259 { 00:04:57.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.259 "dma_device_type": 2 00:04:57.259 } 00:04:57.259 ], 00:04:57.259 "driver_specific": {} 00:04:57.259 } 00:04:57.259 ]' 00:04:57.259 05:43:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:57.259 05:43:04 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:57.259 05:43:04 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:57.259 05:43:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.259 05:43:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.259 05:43:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.259 05:43:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:57.259 05:43:04 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.259 05:43:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.259 05:43:04 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.259 05:43:04 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:57.259 05:43:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:57.259 05:43:04 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:57.259 00:04:57.259 real 0m0.154s 00:04:57.259 user 0m0.080s 00:04:57.259 sys 0m0.028s 00:04:57.259 05:43:04 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.259 05:43:04 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.259 ************************************ 00:04:57.259 END TEST rpc_plugins 00:04:57.259 ************************************ 00:04:57.260 05:43:04 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:57.260 05:43:04 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.260 05:43:04 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.260 05:43:04 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.520 ************************************ 00:04:57.520 START TEST rpc_trace_cmd_test 00:04:57.520 ************************************ 00:04:57.520 05:43:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:57.520 05:43:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:57.520 05:43:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:57.520 05:43:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.520 05:43:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:57.520 05:43:04 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.520 05:43:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:57.520 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57987", 00:04:57.520 "tpoint_group_mask": "0x8", 00:04:57.520 "iscsi_conn": { 00:04:57.520 "mask": "0x2", 00:04:57.520 "tpoint_mask": "0x0" 00:04:57.520 }, 00:04:57.520 "scsi": { 00:04:57.520 "mask": "0x4", 00:04:57.520 "tpoint_mask": "0x0" 00:04:57.520 }, 00:04:57.520 "bdev": { 00:04:57.520 "mask": "0x8", 00:04:57.520 "tpoint_mask": "0xffffffffffffffff" 00:04:57.520 }, 00:04:57.520 "nvmf_rdma": { 00:04:57.520 "mask": "0x10", 00:04:57.520 "tpoint_mask": "0x0" 00:04:57.520 }, 00:04:57.520 "nvmf_tcp": { 00:04:57.520 "mask": "0x20", 00:04:57.520 "tpoint_mask": "0x0" 00:04:57.520 }, 00:04:57.520 "ftl": { 00:04:57.520 "mask": "0x40", 00:04:57.520 "tpoint_mask": "0x0" 00:04:57.520 }, 00:04:57.520 "blobfs": { 00:04:57.520 "mask": "0x80", 00:04:57.520 "tpoint_mask": "0x0" 00:04:57.520 }, 00:04:57.520 "dsa": { 00:04:57.520 "mask": "0x200", 00:04:57.520 "tpoint_mask": "0x0" 00:04:57.520 }, 00:04:57.520 "thread": { 00:04:57.520 "mask": "0x400", 00:04:57.520 
"tpoint_mask": "0x0" 00:04:57.520 }, 00:04:57.520 "nvme_pcie": { 00:04:57.520 "mask": "0x800", 00:04:57.520 "tpoint_mask": "0x0" 00:04:57.520 }, 00:04:57.520 "iaa": { 00:04:57.520 "mask": "0x1000", 00:04:57.520 "tpoint_mask": "0x0" 00:04:57.520 }, 00:04:57.520 "nvme_tcp": { 00:04:57.520 "mask": "0x2000", 00:04:57.520 "tpoint_mask": "0x0" 00:04:57.520 }, 00:04:57.520 "bdev_nvme": { 00:04:57.520 "mask": "0x4000", 00:04:57.520 "tpoint_mask": "0x0" 00:04:57.520 }, 00:04:57.520 "sock": { 00:04:57.520 "mask": "0x8000", 00:04:57.520 "tpoint_mask": "0x0" 00:04:57.520 }, 00:04:57.520 "blob": { 00:04:57.520 "mask": "0x10000", 00:04:57.520 "tpoint_mask": "0x0" 00:04:57.520 }, 00:04:57.520 "bdev_raid": { 00:04:57.520 "mask": "0x20000", 00:04:57.520 "tpoint_mask": "0x0" 00:04:57.520 }, 00:04:57.520 "scheduler": { 00:04:57.520 "mask": "0x40000", 00:04:57.520 "tpoint_mask": "0x0" 00:04:57.520 } 00:04:57.520 }' 00:04:57.520 05:43:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:57.520 05:43:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:57.520 05:43:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:57.520 05:43:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:57.520 05:43:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:57.520 05:43:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:57.520 05:43:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:57.520 05:43:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:57.520 05:43:04 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:57.520 05:43:05 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:57.520 00:04:57.520 real 0m0.230s 00:04:57.520 user 0m0.184s 00:04:57.520 sys 0m0.039s 00:04:57.520 05:43:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:04:57.520 05:43:05 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:57.520 ************************************ 00:04:57.520 END TEST rpc_trace_cmd_test 00:04:57.520 ************************************ 00:04:57.781 05:43:05 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:57.781 05:43:05 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:57.781 05:43:05 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:57.781 05:43:05 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.781 05:43:05 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.781 05:43:05 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.781 ************************************ 00:04:57.781 START TEST rpc_daemon_integrity 00:04:57.781 ************************************ 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:57.781 { 00:04:57.781 "name": "Malloc2", 00:04:57.781 "aliases": [ 00:04:57.781 "e6c2a9c9-86d8-4d37-b48b-008d533bbf81" 00:04:57.781 ], 00:04:57.781 "product_name": "Malloc disk", 00:04:57.781 "block_size": 512, 00:04:57.781 "num_blocks": 16384, 00:04:57.781 "uuid": "e6c2a9c9-86d8-4d37-b48b-008d533bbf81", 00:04:57.781 "assigned_rate_limits": { 00:04:57.781 "rw_ios_per_sec": 0, 00:04:57.781 "rw_mbytes_per_sec": 0, 00:04:57.781 "r_mbytes_per_sec": 0, 00:04:57.781 "w_mbytes_per_sec": 0 00:04:57.781 }, 00:04:57.781 "claimed": false, 00:04:57.781 "zoned": false, 00:04:57.781 "supported_io_types": { 00:04:57.781 "read": true, 00:04:57.781 "write": true, 00:04:57.781 "unmap": true, 00:04:57.781 "flush": true, 00:04:57.781 "reset": true, 00:04:57.781 "nvme_admin": false, 00:04:57.781 "nvme_io": false, 00:04:57.781 "nvme_io_md": false, 00:04:57.781 "write_zeroes": true, 00:04:57.781 "zcopy": true, 00:04:57.781 "get_zone_info": false, 00:04:57.781 "zone_management": false, 00:04:57.781 "zone_append": false, 00:04:57.781 "compare": false, 00:04:57.781 "compare_and_write": false, 00:04:57.781 "abort": true, 00:04:57.781 "seek_hole": false, 00:04:57.781 "seek_data": false, 00:04:57.781 "copy": true, 00:04:57.781 "nvme_iov_md": false 00:04:57.781 }, 00:04:57.781 "memory_domains": [ 00:04:57.781 { 00:04:57.781 "dma_device_id": "system", 00:04:57.781 "dma_device_type": 1 00:04:57.781 }, 00:04:57.781 { 00:04:57.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.781 "dma_device_type": 2 00:04:57.781 } 
00:04:57.781 ], 00:04:57.781 "driver_specific": {} 00:04:57.781 } 00:04:57.781 ]' 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.781 [2024-12-12 05:43:05.241361] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:57.781 [2024-12-12 05:43:05.241415] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:57.781 [2024-12-12 05:43:05.241435] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:57.781 [2024-12-12 05:43:05.241446] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:57.781 [2024-12-12 05:43:05.243711] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:57.781 [2024-12-12 05:43:05.243749] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:57.781 Passthru0 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.781 05:43:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:57.781 { 00:04:57.781 "name": "Malloc2", 00:04:57.781 "aliases": [ 00:04:57.781 "e6c2a9c9-86d8-4d37-b48b-008d533bbf81" 
00:04:57.781 ], 00:04:57.781 "product_name": "Malloc disk", 00:04:57.781 "block_size": 512, 00:04:57.781 "num_blocks": 16384, 00:04:57.781 "uuid": "e6c2a9c9-86d8-4d37-b48b-008d533bbf81", 00:04:57.781 "assigned_rate_limits": { 00:04:57.781 "rw_ios_per_sec": 0, 00:04:57.781 "rw_mbytes_per_sec": 0, 00:04:57.781 "r_mbytes_per_sec": 0, 00:04:57.781 "w_mbytes_per_sec": 0 00:04:57.781 }, 00:04:57.781 "claimed": true, 00:04:57.781 "claim_type": "exclusive_write", 00:04:57.781 "zoned": false, 00:04:57.781 "supported_io_types": { 00:04:57.781 "read": true, 00:04:57.781 "write": true, 00:04:57.781 "unmap": true, 00:04:57.781 "flush": true, 00:04:57.781 "reset": true, 00:04:57.781 "nvme_admin": false, 00:04:57.781 "nvme_io": false, 00:04:57.781 "nvme_io_md": false, 00:04:57.781 "write_zeroes": true, 00:04:57.781 "zcopy": true, 00:04:57.781 "get_zone_info": false, 00:04:57.781 "zone_management": false, 00:04:57.781 "zone_append": false, 00:04:57.781 "compare": false, 00:04:57.781 "compare_and_write": false, 00:04:57.781 "abort": true, 00:04:57.781 "seek_hole": false, 00:04:57.781 "seek_data": false, 00:04:57.781 "copy": true, 00:04:57.781 "nvme_iov_md": false 00:04:57.781 }, 00:04:57.781 "memory_domains": [ 00:04:57.781 { 00:04:57.781 "dma_device_id": "system", 00:04:57.781 "dma_device_type": 1 00:04:57.781 }, 00:04:57.781 { 00:04:57.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.781 "dma_device_type": 2 00:04:57.781 } 00:04:57.781 ], 00:04:57.781 "driver_specific": {} 00:04:57.781 }, 00:04:57.781 { 00:04:57.781 "name": "Passthru0", 00:04:57.781 "aliases": [ 00:04:57.781 "205774d5-2fe3-5080-8515-a06bc9d534ce" 00:04:57.781 ], 00:04:57.781 "product_name": "passthru", 00:04:57.781 "block_size": 512, 00:04:57.781 "num_blocks": 16384, 00:04:57.781 "uuid": "205774d5-2fe3-5080-8515-a06bc9d534ce", 00:04:57.781 "assigned_rate_limits": { 00:04:57.781 "rw_ios_per_sec": 0, 00:04:57.781 "rw_mbytes_per_sec": 0, 00:04:57.781 "r_mbytes_per_sec": 0, 00:04:57.781 "w_mbytes_per_sec": 0 
00:04:57.781 }, 00:04:57.781 "claimed": false, 00:04:57.781 "zoned": false, 00:04:57.781 "supported_io_types": { 00:04:57.781 "read": true, 00:04:57.781 "write": true, 00:04:57.781 "unmap": true, 00:04:57.781 "flush": true, 00:04:57.781 "reset": true, 00:04:57.781 "nvme_admin": false, 00:04:57.782 "nvme_io": false, 00:04:57.782 "nvme_io_md": false, 00:04:57.782 "write_zeroes": true, 00:04:57.782 "zcopy": true, 00:04:57.782 "get_zone_info": false, 00:04:57.782 "zone_management": false, 00:04:57.782 "zone_append": false, 00:04:57.782 "compare": false, 00:04:57.782 "compare_and_write": false, 00:04:57.782 "abort": true, 00:04:57.782 "seek_hole": false, 00:04:57.782 "seek_data": false, 00:04:57.782 "copy": true, 00:04:57.782 "nvme_iov_md": false 00:04:57.782 }, 00:04:57.782 "memory_domains": [ 00:04:57.782 { 00:04:57.782 "dma_device_id": "system", 00:04:57.782 "dma_device_type": 1 00:04:57.782 }, 00:04:57.782 { 00:04:57.782 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.782 "dma_device_type": 2 00:04:57.782 } 00:04:57.782 ], 00:04:57.782 "driver_specific": { 00:04:57.782 "passthru": { 00:04:57.782 "name": "Passthru0", 00:04:57.782 "base_bdev_name": "Malloc2" 00:04:57.782 } 00:04:57.782 } 00:04:57.782 } 00:04:57.782 ]' 00:04:57.782 05:43:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:58.040 05:43:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:58.040 05:43:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:58.040 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.040 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.040 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.040 05:43:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:58.040 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:04:58.040 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.040 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.040 05:43:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:58.040 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.040 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.040 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.040 05:43:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:58.040 05:43:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:58.040 05:43:05 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:58.040 00:04:58.040 real 0m0.336s 00:04:58.040 user 0m0.179s 00:04:58.040 sys 0m0.060s 00:04:58.040 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:58.040 05:43:05 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.040 ************************************ 00:04:58.040 END TEST rpc_daemon_integrity 00:04:58.040 ************************************ 00:04:58.040 05:43:05 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:58.040 05:43:05 rpc -- rpc/rpc.sh@84 -- # killprocess 57987 00:04:58.040 05:43:05 rpc -- common/autotest_common.sh@954 -- # '[' -z 57987 ']' 00:04:58.041 05:43:05 rpc -- common/autotest_common.sh@958 -- # kill -0 57987 00:04:58.041 05:43:05 rpc -- common/autotest_common.sh@959 -- # uname 00:04:58.041 05:43:05 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:58.041 05:43:05 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57987 00:04:58.041 05:43:05 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:58.041 05:43:05 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:58.041 
05:43:05 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57987' 00:04:58.041 killing process with pid 57987 00:04:58.041 05:43:05 rpc -- common/autotest_common.sh@973 -- # kill 57987 00:04:58.041 05:43:05 rpc -- common/autotest_common.sh@978 -- # wait 57987 00:05:00.581 00:05:00.581 real 0m5.050s 00:05:00.581 user 0m5.523s 00:05:00.581 sys 0m0.928s 00:05:00.581 05:43:07 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.581 05:43:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.581 ************************************ 00:05:00.581 END TEST rpc 00:05:00.581 ************************************ 00:05:00.581 05:43:07 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:00.581 05:43:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.581 05:43:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.581 05:43:07 -- common/autotest_common.sh@10 -- # set +x 00:05:00.581 ************************************ 00:05:00.581 START TEST skip_rpc 00:05:00.581 ************************************ 00:05:00.581 05:43:07 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:00.581 * Looking for test storage... 
00:05:00.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:00.581 05:43:07 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:00.581 05:43:07 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:00.581 05:43:07 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:00.581 05:43:08 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.581 05:43:08 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:00.581 05:43:08 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.581 05:43:08 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:00.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.581 --rc genhtml_branch_coverage=1 00:05:00.581 --rc genhtml_function_coverage=1 00:05:00.581 --rc genhtml_legend=1 00:05:00.581 --rc geninfo_all_blocks=1 00:05:00.581 --rc geninfo_unexecuted_blocks=1 00:05:00.581 00:05:00.581 ' 00:05:00.581 05:43:08 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:00.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.581 --rc genhtml_branch_coverage=1 00:05:00.581 --rc genhtml_function_coverage=1 00:05:00.581 --rc genhtml_legend=1 00:05:00.581 --rc geninfo_all_blocks=1 00:05:00.581 --rc geninfo_unexecuted_blocks=1 00:05:00.581 00:05:00.581 ' 00:05:00.581 05:43:08 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:00.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.581 --rc genhtml_branch_coverage=1 00:05:00.581 --rc genhtml_function_coverage=1 00:05:00.581 --rc genhtml_legend=1 00:05:00.581 --rc geninfo_all_blocks=1 00:05:00.581 --rc geninfo_unexecuted_blocks=1 00:05:00.581 00:05:00.581 ' 00:05:00.581 05:43:08 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:00.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.581 --rc genhtml_branch_coverage=1 00:05:00.581 --rc genhtml_function_coverage=1 00:05:00.581 --rc genhtml_legend=1 00:05:00.581 --rc geninfo_all_blocks=1 00:05:00.581 --rc geninfo_unexecuted_blocks=1 00:05:00.581 00:05:00.581 ' 00:05:00.581 05:43:08 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:00.581 05:43:08 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:00.581 05:43:08 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:00.581 05:43:08 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.581 05:43:08 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.581 05:43:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.581 ************************************ 00:05:00.581 START TEST skip_rpc 00:05:00.581 ************************************ 00:05:00.581 05:43:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:00.581 05:43:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58216 00:05:00.581 05:43:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:00.581 05:43:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.581 05:43:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:00.841 [2024-12-12 05:43:08.187718] Starting SPDK v25.01-pre 
git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:05:00.841 [2024-12-12 05:43:08.187839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58216 ] 00:05:00.841 [2024-12-12 05:43:08.361025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.101 [2024-12-12 05:43:08.468913] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58216 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58216 ']' 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58216 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58216 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:06.418 killing process with pid 58216 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58216' 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58216 00:05:06.418 05:43:13 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58216 00:05:08.348 00:05:08.348 real 0m7.324s 00:05:08.348 user 0m6.861s 00:05:08.348 sys 0m0.386s 00:05:08.348 05:43:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.348 05:43:15 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.348 ************************************ 00:05:08.348 END TEST skip_rpc 00:05:08.348 ************************************ 00:05:08.348 05:43:15 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:08.348 05:43:15 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.348 05:43:15 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.348 05:43:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.348 
************************************ 00:05:08.348 START TEST skip_rpc_with_json 00:05:08.348 ************************************ 00:05:08.349 05:43:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:08.349 05:43:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:08.349 05:43:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58326 00:05:08.349 05:43:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:08.349 05:43:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.349 05:43:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58326 00:05:08.349 05:43:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58326 ']' 00:05:08.349 05:43:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.349 05:43:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.349 05:43:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.349 05:43:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.349 05:43:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.349 [2024-12-12 05:43:15.582758] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:05:08.349 [2024-12-12 05:43:15.582886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58326 ] 00:05:08.349 [2024-12-12 05:43:15.755881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.349 [2024-12-12 05:43:15.863378] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.289 05:43:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.289 05:43:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:09.289 05:43:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:09.289 05:43:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.289 05:43:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.289 [2024-12-12 05:43:16.711114] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:09.289 request: 00:05:09.289 { 00:05:09.289 "trtype": "tcp", 00:05:09.289 "method": "nvmf_get_transports", 00:05:09.289 "req_id": 1 00:05:09.289 } 00:05:09.289 Got JSON-RPC error response 00:05:09.289 response: 00:05:09.289 { 00:05:09.289 "code": -19, 00:05:09.289 "message": "No such device" 00:05:09.289 } 00:05:09.289 05:43:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:09.289 05:43:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:09.289 05:43:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.289 05:43:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.289 [2024-12-12 05:43:16.723203] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:09.289 05:43:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.289 05:43:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:09.289 05:43:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.289 05:43:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:09.549 05:43:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.549 05:43:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:09.549 { 00:05:09.549 "subsystems": [ 00:05:09.549 { 00:05:09.549 "subsystem": "fsdev", 00:05:09.549 "config": [ 00:05:09.549 { 00:05:09.549 "method": "fsdev_set_opts", 00:05:09.549 "params": { 00:05:09.549 "fsdev_io_pool_size": 65535, 00:05:09.549 "fsdev_io_cache_size": 256 00:05:09.549 } 00:05:09.549 } 00:05:09.549 ] 00:05:09.549 }, 00:05:09.549 { 00:05:09.549 "subsystem": "keyring", 00:05:09.549 "config": [] 00:05:09.549 }, 00:05:09.549 { 00:05:09.549 "subsystem": "iobuf", 00:05:09.549 "config": [ 00:05:09.549 { 00:05:09.549 "method": "iobuf_set_options", 00:05:09.549 "params": { 00:05:09.549 "small_pool_count": 8192, 00:05:09.549 "large_pool_count": 1024, 00:05:09.549 "small_bufsize": 8192, 00:05:09.549 "large_bufsize": 135168, 00:05:09.549 "enable_numa": false 00:05:09.549 } 00:05:09.549 } 00:05:09.549 ] 00:05:09.549 }, 00:05:09.549 { 00:05:09.549 "subsystem": "sock", 00:05:09.549 "config": [ 00:05:09.549 { 00:05:09.549 "method": "sock_set_default_impl", 00:05:09.549 "params": { 00:05:09.549 "impl_name": "posix" 00:05:09.549 } 00:05:09.549 }, 00:05:09.549 { 00:05:09.549 "method": "sock_impl_set_options", 00:05:09.549 "params": { 00:05:09.549 "impl_name": "ssl", 00:05:09.549 "recv_buf_size": 4096, 00:05:09.549 "send_buf_size": 4096, 00:05:09.549 "enable_recv_pipe": true, 00:05:09.549 "enable_quickack": false, 00:05:09.549 
"enable_placement_id": 0, 00:05:09.549 "enable_zerocopy_send_server": true, 00:05:09.549 "enable_zerocopy_send_client": false, 00:05:09.549 "zerocopy_threshold": 0, 00:05:09.549 "tls_version": 0, 00:05:09.549 "enable_ktls": false 00:05:09.549 } 00:05:09.549 }, 00:05:09.549 { 00:05:09.549 "method": "sock_impl_set_options", 00:05:09.549 "params": { 00:05:09.549 "impl_name": "posix", 00:05:09.549 "recv_buf_size": 2097152, 00:05:09.549 "send_buf_size": 2097152, 00:05:09.549 "enable_recv_pipe": true, 00:05:09.549 "enable_quickack": false, 00:05:09.549 "enable_placement_id": 0, 00:05:09.549 "enable_zerocopy_send_server": true, 00:05:09.549 "enable_zerocopy_send_client": false, 00:05:09.549 "zerocopy_threshold": 0, 00:05:09.549 "tls_version": 0, 00:05:09.549 "enable_ktls": false 00:05:09.549 } 00:05:09.549 } 00:05:09.549 ] 00:05:09.549 }, 00:05:09.549 { 00:05:09.549 "subsystem": "vmd", 00:05:09.549 "config": [] 00:05:09.549 }, 00:05:09.549 { 00:05:09.549 "subsystem": "accel", 00:05:09.549 "config": [ 00:05:09.549 { 00:05:09.549 "method": "accel_set_options", 00:05:09.549 "params": { 00:05:09.549 "small_cache_size": 128, 00:05:09.549 "large_cache_size": 16, 00:05:09.549 "task_count": 2048, 00:05:09.549 "sequence_count": 2048, 00:05:09.549 "buf_count": 2048 00:05:09.549 } 00:05:09.549 } 00:05:09.549 ] 00:05:09.549 }, 00:05:09.549 { 00:05:09.549 "subsystem": "bdev", 00:05:09.549 "config": [ 00:05:09.549 { 00:05:09.549 "method": "bdev_set_options", 00:05:09.550 "params": { 00:05:09.550 "bdev_io_pool_size": 65535, 00:05:09.550 "bdev_io_cache_size": 256, 00:05:09.550 "bdev_auto_examine": true, 00:05:09.550 "iobuf_small_cache_size": 128, 00:05:09.550 "iobuf_large_cache_size": 16 00:05:09.550 } 00:05:09.550 }, 00:05:09.550 { 00:05:09.550 "method": "bdev_raid_set_options", 00:05:09.550 "params": { 00:05:09.550 "process_window_size_kb": 1024, 00:05:09.550 "process_max_bandwidth_mb_sec": 0 00:05:09.550 } 00:05:09.550 }, 00:05:09.550 { 00:05:09.550 "method": "bdev_iscsi_set_options", 
00:05:09.550 "params": { 00:05:09.550 "timeout_sec": 30 00:05:09.550 } 00:05:09.550 }, 00:05:09.550 { 00:05:09.550 "method": "bdev_nvme_set_options", 00:05:09.550 "params": { 00:05:09.550 "action_on_timeout": "none", 00:05:09.550 "timeout_us": 0, 00:05:09.550 "timeout_admin_us": 0, 00:05:09.550 "keep_alive_timeout_ms": 10000, 00:05:09.550 "arbitration_burst": 0, 00:05:09.550 "low_priority_weight": 0, 00:05:09.550 "medium_priority_weight": 0, 00:05:09.550 "high_priority_weight": 0, 00:05:09.550 "nvme_adminq_poll_period_us": 10000, 00:05:09.550 "nvme_ioq_poll_period_us": 0, 00:05:09.550 "io_queue_requests": 0, 00:05:09.550 "delay_cmd_submit": true, 00:05:09.550 "transport_retry_count": 4, 00:05:09.550 "bdev_retry_count": 3, 00:05:09.550 "transport_ack_timeout": 0, 00:05:09.550 "ctrlr_loss_timeout_sec": 0, 00:05:09.550 "reconnect_delay_sec": 0, 00:05:09.550 "fast_io_fail_timeout_sec": 0, 00:05:09.550 "disable_auto_failback": false, 00:05:09.550 "generate_uuids": false, 00:05:09.550 "transport_tos": 0, 00:05:09.550 "nvme_error_stat": false, 00:05:09.550 "rdma_srq_size": 0, 00:05:09.550 "io_path_stat": false, 00:05:09.550 "allow_accel_sequence": false, 00:05:09.550 "rdma_max_cq_size": 0, 00:05:09.550 "rdma_cm_event_timeout_ms": 0, 00:05:09.550 "dhchap_digests": [ 00:05:09.550 "sha256", 00:05:09.550 "sha384", 00:05:09.550 "sha512" 00:05:09.550 ], 00:05:09.550 "dhchap_dhgroups": [ 00:05:09.550 "null", 00:05:09.550 "ffdhe2048", 00:05:09.550 "ffdhe3072", 00:05:09.550 "ffdhe4096", 00:05:09.550 "ffdhe6144", 00:05:09.550 "ffdhe8192" 00:05:09.550 ], 00:05:09.550 "rdma_umr_per_io": false 00:05:09.550 } 00:05:09.550 }, 00:05:09.550 { 00:05:09.550 "method": "bdev_nvme_set_hotplug", 00:05:09.550 "params": { 00:05:09.550 "period_us": 100000, 00:05:09.550 "enable": false 00:05:09.550 } 00:05:09.550 }, 00:05:09.550 { 00:05:09.550 "method": "bdev_wait_for_examine" 00:05:09.550 } 00:05:09.550 ] 00:05:09.550 }, 00:05:09.550 { 00:05:09.550 "subsystem": "scsi", 00:05:09.550 "config": null 
00:05:09.550 }, 00:05:09.550 { 00:05:09.550 "subsystem": "scheduler", 00:05:09.550 "config": [ 00:05:09.550 { 00:05:09.550 "method": "framework_set_scheduler", 00:05:09.550 "params": { 00:05:09.550 "name": "static" 00:05:09.550 } 00:05:09.550 } 00:05:09.550 ] 00:05:09.550 }, 00:05:09.550 { 00:05:09.550 "subsystem": "vhost_scsi", 00:05:09.550 "config": [] 00:05:09.550 }, 00:05:09.550 { 00:05:09.550 "subsystem": "vhost_blk", 00:05:09.550 "config": [] 00:05:09.550 }, 00:05:09.550 { 00:05:09.550 "subsystem": "ublk", 00:05:09.550 "config": [] 00:05:09.550 }, 00:05:09.550 { 00:05:09.550 "subsystem": "nbd", 00:05:09.550 "config": [] 00:05:09.550 }, 00:05:09.550 { 00:05:09.550 "subsystem": "nvmf", 00:05:09.550 "config": [ 00:05:09.550 { 00:05:09.550 "method": "nvmf_set_config", 00:05:09.550 "params": { 00:05:09.550 "discovery_filter": "match_any", 00:05:09.550 "admin_cmd_passthru": { 00:05:09.550 "identify_ctrlr": false 00:05:09.550 }, 00:05:09.550 "dhchap_digests": [ 00:05:09.550 "sha256", 00:05:09.550 "sha384", 00:05:09.550 "sha512" 00:05:09.550 ], 00:05:09.550 "dhchap_dhgroups": [ 00:05:09.550 "null", 00:05:09.550 "ffdhe2048", 00:05:09.550 "ffdhe3072", 00:05:09.550 "ffdhe4096", 00:05:09.550 "ffdhe6144", 00:05:09.550 "ffdhe8192" 00:05:09.550 ] 00:05:09.550 } 00:05:09.550 }, 00:05:09.550 { 00:05:09.550 "method": "nvmf_set_max_subsystems", 00:05:09.550 "params": { 00:05:09.550 "max_subsystems": 1024 00:05:09.550 } 00:05:09.550 }, 00:05:09.550 { 00:05:09.550 "method": "nvmf_set_crdt", 00:05:09.550 "params": { 00:05:09.550 "crdt1": 0, 00:05:09.550 "crdt2": 0, 00:05:09.550 "crdt3": 0 00:05:09.550 } 00:05:09.550 }, 00:05:09.550 { 00:05:09.550 "method": "nvmf_create_transport", 00:05:09.550 "params": { 00:05:09.550 "trtype": "TCP", 00:05:09.550 "max_queue_depth": 128, 00:05:09.550 "max_io_qpairs_per_ctrlr": 127, 00:05:09.550 "in_capsule_data_size": 4096, 00:05:09.550 "max_io_size": 131072, 00:05:09.550 "io_unit_size": 131072, 00:05:09.550 "max_aq_depth": 128, 00:05:09.550 
"num_shared_buffers": 511, 00:05:09.550 "buf_cache_size": 4294967295, 00:05:09.550 "dif_insert_or_strip": false, 00:05:09.550 "zcopy": false, 00:05:09.550 "c2h_success": true, 00:05:09.550 "sock_priority": 0, 00:05:09.550 "abort_timeout_sec": 1, 00:05:09.550 "ack_timeout": 0, 00:05:09.550 "data_wr_pool_size": 0 00:05:09.550 } 00:05:09.550 } 00:05:09.550 ] 00:05:09.550 }, 00:05:09.550 { 00:05:09.550 "subsystem": "iscsi", 00:05:09.550 "config": [ 00:05:09.550 { 00:05:09.550 "method": "iscsi_set_options", 00:05:09.550 "params": { 00:05:09.550 "node_base": "iqn.2016-06.io.spdk", 00:05:09.550 "max_sessions": 128, 00:05:09.550 "max_connections_per_session": 2, 00:05:09.550 "max_queue_depth": 64, 00:05:09.550 "default_time2wait": 2, 00:05:09.550 "default_time2retain": 20, 00:05:09.550 "first_burst_length": 8192, 00:05:09.550 "immediate_data": true, 00:05:09.550 "allow_duplicated_isid": false, 00:05:09.550 "error_recovery_level": 0, 00:05:09.550 "nop_timeout": 60, 00:05:09.550 "nop_in_interval": 30, 00:05:09.550 "disable_chap": false, 00:05:09.550 "require_chap": false, 00:05:09.550 "mutual_chap": false, 00:05:09.550 "chap_group": 0, 00:05:09.550 "max_large_datain_per_connection": 64, 00:05:09.550 "max_r2t_per_connection": 4, 00:05:09.550 "pdu_pool_size": 36864, 00:05:09.550 "immediate_data_pool_size": 16384, 00:05:09.550 "data_out_pool_size": 2048 00:05:09.550 } 00:05:09.550 } 00:05:09.550 ] 00:05:09.550 } 00:05:09.550 ] 00:05:09.550 } 00:05:09.550 05:43:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:09.550 05:43:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58326 00:05:09.550 05:43:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58326 ']' 00:05:09.550 05:43:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58326 00:05:09.550 05:43:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:09.550 05:43:16 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.550 05:43:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58326 00:05:09.550 05:43:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.550 05:43:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.550 killing process with pid 58326 00:05:09.550 05:43:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58326' 00:05:09.550 05:43:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58326 00:05:09.550 05:43:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58326 00:05:12.088 05:43:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58371 00:05:12.088 05:43:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:12.088 05:43:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:17.369 05:43:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58371 00:05:17.369 05:43:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58371 ']' 00:05:17.369 05:43:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58371 00:05:17.369 05:43:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:17.369 05:43:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.369 05:43:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58371 00:05:17.369 05:43:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.369 05:43:24 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.369 killing process with pid 58371 00:05:17.369 05:43:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58371' 00:05:17.369 05:43:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58371 00:05:17.369 05:43:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58371 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:19.319 00:05:19.319 real 0m10.977s 00:05:19.319 user 0m10.429s 00:05:19.319 sys 0m0.831s 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.319 ************************************ 00:05:19.319 END TEST skip_rpc_with_json 00:05:19.319 ************************************ 00:05:19.319 05:43:26 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:19.319 05:43:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.319 05:43:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.319 05:43:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.319 ************************************ 00:05:19.319 START TEST skip_rpc_with_delay 00:05:19.319 ************************************ 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@652 -- # local es=0 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:19.319 [2024-12-12 05:43:26.633937] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:19.319 00:05:19.319 real 0m0.163s 00:05:19.319 user 0m0.088s 00:05:19.319 sys 0m0.074s 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.319 05:43:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:19.319 ************************************ 00:05:19.319 END TEST skip_rpc_with_delay 00:05:19.319 ************************************ 00:05:19.319 05:43:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:19.319 05:43:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:19.319 05:43:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:19.319 05:43:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.319 05:43:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.319 05:43:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.319 ************************************ 00:05:19.319 START TEST exit_on_failed_rpc_init 00:05:19.319 ************************************ 00:05:19.319 05:43:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:19.319 05:43:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58501 00:05:19.319 05:43:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.319 05:43:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58501 00:05:19.319 05:43:26 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58501 ']' 00:05:19.319 05:43:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.319 05:43:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.319 05:43:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.319 05:43:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.319 05:43:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:19.579 [2024-12-12 05:43:26.862644] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:05:19.579 [2024-12-12 05:43:26.862747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58501 ] 00:05:19.579 [2024-12-12 05:43:27.036037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.839 [2024-12-12 05:43:27.143524] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.777 05:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.777 05:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:20.777 05:43:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.777 05:43:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:20.777 05:43:27 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:20.777 05:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:20.777 05:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:20.778 05:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:20.778 05:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:20.778 05:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:20.778 05:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:20.778 05:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:20.778 05:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:20.778 05:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:20.778 05:43:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:20.778 [2024-12-12 05:43:28.066887] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:05:20.778 [2024-12-12 05:43:28.066990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58527 ] 00:05:20.778 [2024-12-12 05:43:28.241508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.037 [2024-12-12 05:43:28.349957] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.037 [2024-12-12 05:43:28.350042] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:21.037 [2024-12-12 05:43:28.350055] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:21.037 [2024-12-12 05:43:28.350065] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:21.297 05:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:21.297 05:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:21.297 05:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:21.297 05:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:21.297 05:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:21.297 05:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:21.297 05:43:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:21.297 05:43:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58501 00:05:21.297 05:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58501 ']' 00:05:21.297 05:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58501 00:05:21.297 05:43:28 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:21.297 05:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:21.297 05:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58501 00:05:21.297 05:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:21.297 05:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:21.297 killing process with pid 58501 00:05:21.297 05:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58501' 00:05:21.297 05:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58501 00:05:21.297 05:43:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58501 00:05:23.833 00:05:23.833 real 0m4.120s 00:05:23.833 user 0m4.424s 00:05:23.833 sys 0m0.548s 00:05:23.833 05:43:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.833 05:43:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:23.833 ************************************ 00:05:23.833 END TEST exit_on_failed_rpc_init 00:05:23.833 ************************************ 00:05:23.833 05:43:30 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:23.833 00:05:23.833 real 0m23.113s 00:05:23.833 user 0m22.033s 00:05:23.833 sys 0m2.140s 00:05:23.833 05:43:30 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.833 05:43:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.833 ************************************ 00:05:23.833 END TEST skip_rpc 00:05:23.833 ************************************ 00:05:23.833 05:43:30 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:23.833 05:43:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.833 05:43:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.833 05:43:30 -- common/autotest_common.sh@10 -- # set +x 00:05:23.833 ************************************ 00:05:23.833 START TEST rpc_client 00:05:23.833 ************************************ 00:05:23.833 05:43:31 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:23.833 * Looking for test storage... 00:05:23.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:23.833 05:43:31 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:23.833 05:43:31 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:23.833 05:43:31 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:23.833 05:43:31 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@345 
-- # : 1 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.833 05:43:31 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:23.833 05:43:31 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.833 05:43:31 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:23.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.833 --rc genhtml_branch_coverage=1 00:05:23.833 --rc genhtml_function_coverage=1 00:05:23.833 --rc genhtml_legend=1 00:05:23.833 --rc geninfo_all_blocks=1 00:05:23.833 --rc geninfo_unexecuted_blocks=1 00:05:23.833 00:05:23.833 ' 00:05:23.833 05:43:31 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:23.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.833 --rc genhtml_branch_coverage=1 00:05:23.833 --rc genhtml_function_coverage=1 00:05:23.833 --rc 
genhtml_legend=1 00:05:23.833 --rc geninfo_all_blocks=1 00:05:23.833 --rc geninfo_unexecuted_blocks=1 00:05:23.833 00:05:23.833 ' 00:05:23.833 05:43:31 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:23.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.833 --rc genhtml_branch_coverage=1 00:05:23.833 --rc genhtml_function_coverage=1 00:05:23.833 --rc genhtml_legend=1 00:05:23.833 --rc geninfo_all_blocks=1 00:05:23.833 --rc geninfo_unexecuted_blocks=1 00:05:23.833 00:05:23.833 ' 00:05:23.833 05:43:31 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:23.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.833 --rc genhtml_branch_coverage=1 00:05:23.833 --rc genhtml_function_coverage=1 00:05:23.833 --rc genhtml_legend=1 00:05:23.833 --rc geninfo_all_blocks=1 00:05:23.833 --rc geninfo_unexecuted_blocks=1 00:05:23.833 00:05:23.833 ' 00:05:23.833 05:43:31 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:23.833 OK 00:05:23.833 05:43:31 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:23.833 00:05:23.833 real 0m0.287s 00:05:23.833 user 0m0.147s 00:05:23.833 sys 0m0.157s 00:05:23.833 05:43:31 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.833 05:43:31 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:23.833 ************************************ 00:05:23.833 END TEST rpc_client 00:05:23.833 ************************************ 00:05:23.833 05:43:31 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:23.833 05:43:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.833 05:43:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.833 05:43:31 -- common/autotest_common.sh@10 -- # set +x 00:05:24.094 ************************************ 00:05:24.094 START TEST json_config 
00:05:24.094 ************************************ 00:05:24.094 05:43:31 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:24.094 05:43:31 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.094 05:43:31 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.094 05:43:31 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.094 05:43:31 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.094 05:43:31 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.094 05:43:31 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.094 05:43:31 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.094 05:43:31 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.094 05:43:31 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.094 05:43:31 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.094 05:43:31 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.094 05:43:31 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.094 05:43:31 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.094 05:43:31 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.094 05:43:31 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.094 05:43:31 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:24.094 05:43:31 json_config -- scripts/common.sh@345 -- # : 1 00:05:24.094 05:43:31 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.094 05:43:31 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.094 05:43:31 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:24.094 05:43:31 json_config -- scripts/common.sh@353 -- # local d=1 00:05:24.094 05:43:31 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.094 05:43:31 json_config -- scripts/common.sh@355 -- # echo 1 00:05:24.094 05:43:31 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.094 05:43:31 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:24.094 05:43:31 json_config -- scripts/common.sh@353 -- # local d=2 00:05:24.094 05:43:31 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.094 05:43:31 json_config -- scripts/common.sh@355 -- # echo 2 00:05:24.094 05:43:31 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.094 05:43:31 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.094 05:43:31 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.094 05:43:31 json_config -- scripts/common.sh@368 -- # return 0 00:05:24.094 05:43:31 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.094 05:43:31 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.094 --rc genhtml_branch_coverage=1 00:05:24.094 --rc genhtml_function_coverage=1 00:05:24.094 --rc genhtml_legend=1 00:05:24.094 --rc geninfo_all_blocks=1 00:05:24.094 --rc geninfo_unexecuted_blocks=1 00:05:24.094 00:05:24.094 ' 00:05:24.094 05:43:31 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.094 --rc genhtml_branch_coverage=1 00:05:24.094 --rc genhtml_function_coverage=1 00:05:24.094 --rc genhtml_legend=1 00:05:24.094 --rc geninfo_all_blocks=1 00:05:24.094 --rc geninfo_unexecuted_blocks=1 00:05:24.094 00:05:24.094 ' 00:05:24.094 05:43:31 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.094 --rc genhtml_branch_coverage=1 00:05:24.094 --rc genhtml_function_coverage=1 00:05:24.094 --rc genhtml_legend=1 00:05:24.094 --rc geninfo_all_blocks=1 00:05:24.094 --rc geninfo_unexecuted_blocks=1 00:05:24.094 00:05:24.094 ' 00:05:24.094 05:43:31 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.094 --rc genhtml_branch_coverage=1 00:05:24.094 --rc genhtml_function_coverage=1 00:05:24.094 --rc genhtml_legend=1 00:05:24.094 --rc geninfo_all_blocks=1 00:05:24.094 --rc geninfo_unexecuted_blocks=1 00:05:24.094 00:05:24.094 ' 00:05:24.094 05:43:31 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dedb147-6356-40e2-9718-b1cf30e7de80 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=1dedb147-6356-40e2-9718-b1cf30e7de80 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:24.094 05:43:31 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:24.094 05:43:31 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.094 05:43:31 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.094 05:43:31 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.094 05:43:31 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.094 05:43:31 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.094 05:43:31 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.094 05:43:31 json_config -- paths/export.sh@5 -- # export PATH 00:05:24.094 05:43:31 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@51 -- # : 0 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:24.094 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:24.094 05:43:31 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:24.095 05:43:31 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:24.095 05:43:31 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:24.095 05:43:31 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:24.095 05:43:31 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:24.095 05:43:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:24.095 05:43:31 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:24.095 05:43:31 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:24.095 WARNING: No tests are enabled so not running JSON configuration tests 00:05:24.095 05:43:31 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:24.095 05:43:31 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:24.095 00:05:24.095 real 0m0.222s 00:05:24.095 user 0m0.138s 00:05:24.095 sys 0m0.092s 00:05:24.095 05:43:31 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.095 05:43:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.095 ************************************ 00:05:24.095 END TEST json_config 00:05:24.095 ************************************ 00:05:24.355 05:43:31 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:24.355 05:43:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.355 05:43:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.355 05:43:31 -- common/autotest_common.sh@10 -- # set +x 00:05:24.355 ************************************ 00:05:24.355 START TEST json_config_extra_key 00:05:24.355 ************************************ 00:05:24.355 05:43:31 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:24.355 05:43:31 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.355 05:43:31 json_config_extra_key -- 
common/autotest_common.sh@1711 -- # lcov --version 00:05:24.356 05:43:31 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.356 05:43:31 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:24.356 05:43:31 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.356 05:43:31 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.356 --rc genhtml_branch_coverage=1 00:05:24.356 --rc genhtml_function_coverage=1 00:05:24.356 --rc genhtml_legend=1 00:05:24.356 --rc geninfo_all_blocks=1 00:05:24.356 --rc geninfo_unexecuted_blocks=1 00:05:24.356 00:05:24.356 ' 00:05:24.356 05:43:31 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.356 --rc genhtml_branch_coverage=1 00:05:24.356 --rc genhtml_function_coverage=1 00:05:24.356 --rc 
genhtml_legend=1 00:05:24.356 --rc geninfo_all_blocks=1 00:05:24.356 --rc geninfo_unexecuted_blocks=1 00:05:24.356 00:05:24.356 ' 00:05:24.356 05:43:31 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.356 --rc genhtml_branch_coverage=1 00:05:24.356 --rc genhtml_function_coverage=1 00:05:24.356 --rc genhtml_legend=1 00:05:24.356 --rc geninfo_all_blocks=1 00:05:24.356 --rc geninfo_unexecuted_blocks=1 00:05:24.356 00:05:24.356 ' 00:05:24.356 05:43:31 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.356 --rc genhtml_branch_coverage=1 00:05:24.356 --rc genhtml_function_coverage=1 00:05:24.356 --rc genhtml_legend=1 00:05:24.356 --rc geninfo_all_blocks=1 00:05:24.356 --rc geninfo_unexecuted_blocks=1 00:05:24.356 00:05:24.356 ' 00:05:24.356 05:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:1dedb147-6356-40e2-9718-b1cf30e7de80 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=1dedb147-6356-40e2-9718-b1cf30e7de80 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.356 05:43:31 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.356 05:43:31 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.356 05:43:31 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.356 05:43:31 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.356 05:43:31 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:24.356 05:43:31 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:24.356 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:24.356 05:43:31 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:24.356 05:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:24.356 05:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:24.356 05:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:24.356 05:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:24.356 05:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:24.356 05:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:24.356 05:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:24.356 05:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:24.356 05:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:24.356 05:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:24.356 INFO: launching applications... 00:05:24.356 05:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:24.356 05:43:31 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:24.356 05:43:31 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:24.356 05:43:31 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:24.356 05:43:31 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:24.356 05:43:31 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:24.356 05:43:31 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:24.356 05:43:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.356 05:43:31 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.356 05:43:31 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58727 00:05:24.356 Waiting for target to run... 00:05:24.357 05:43:31 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:24.357 05:43:31 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58727 /var/tmp/spdk_tgt.sock 00:05:24.357 05:43:31 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58727 ']' 00:05:24.357 05:43:31 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:24.357 05:43:31 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:24.357 05:43:31 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:24.357 05:43:31 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:24.357 05:43:31 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.357 05:43:31 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:24.616 [2024-12-12 05:43:31.967209] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:05:24.616 [2024-12-12 05:43:31.967323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58727 ] 00:05:24.875 [2024-12-12 05:43:32.355526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.134 [2024-12-12 05:43:32.452215] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.704 05:43:33 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.704 05:43:33 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:25.704 00:05:25.704 05:43:33 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:25.704 INFO: shutting down applications... 00:05:25.704 05:43:33 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:25.704 05:43:33 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:25.704 05:43:33 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:25.704 05:43:33 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:25.704 05:43:33 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58727 ]] 00:05:25.704 05:43:33 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58727 00:05:25.704 05:43:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:25.704 05:43:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:25.704 05:43:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58727 00:05:25.704 05:43:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:26.273 05:43:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:26.273 05:43:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.273 05:43:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58727 00:05:26.273 05:43:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:26.843 05:43:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:26.843 05:43:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:26.843 05:43:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58727 00:05:26.843 05:43:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.412 05:43:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:27.412 05:43:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.412 05:43:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58727 00:05:27.412 05:43:34 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:27.671 05:43:35 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:27.671 05:43:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.671 05:43:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58727 00:05:27.671 05:43:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:28.240 05:43:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:28.240 05:43:35 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.240 05:43:35 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58727 00:05:28.240 05:43:35 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:28.809 05:43:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:28.809 05:43:36 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.810 05:43:36 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58727 00:05:28.810 SPDK target shutdown done 00:05:28.810 Success 00:05:28.810 05:43:36 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:28.810 05:43:36 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:28.810 05:43:36 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:28.810 05:43:36 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:28.810 05:43:36 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:28.810 00:05:28.810 real 0m4.541s 00:05:28.810 user 0m3.864s 00:05:28.810 sys 0m0.542s 00:05:28.810 05:43:36 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.810 05:43:36 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:28.810 ************************************ 00:05:28.810 END TEST json_config_extra_key 00:05:28.810 ************************************ 00:05:28.810 05:43:36 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:28.810 05:43:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.810 05:43:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.810 05:43:36 -- common/autotest_common.sh@10 -- # set +x 00:05:28.810 ************************************ 00:05:28.810 START TEST alias_rpc 00:05:28.810 ************************************ 00:05:28.810 05:43:36 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:29.072 * Looking for test storage... 00:05:29.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:29.072 05:43:36 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:29.072 05:43:36 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:29.072 05:43:36 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:29.072 05:43:36 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:29.072 05:43:36 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.072 05:43:36 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:29.072 05:43:36 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.072 05:43:36 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:29.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.072 --rc genhtml_branch_coverage=1 00:05:29.072 --rc genhtml_function_coverage=1 00:05:29.072 --rc genhtml_legend=1 00:05:29.072 --rc geninfo_all_blocks=1 00:05:29.072 --rc geninfo_unexecuted_blocks=1 00:05:29.072 00:05:29.072 ' 00:05:29.072 05:43:36 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:29.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.072 --rc genhtml_branch_coverage=1 00:05:29.072 --rc genhtml_function_coverage=1 00:05:29.072 --rc 
genhtml_legend=1 00:05:29.072 --rc geninfo_all_blocks=1 00:05:29.072 --rc geninfo_unexecuted_blocks=1 00:05:29.072 00:05:29.072 ' 00:05:29.072 05:43:36 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:29.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.072 --rc genhtml_branch_coverage=1 00:05:29.072 --rc genhtml_function_coverage=1 00:05:29.072 --rc genhtml_legend=1 00:05:29.072 --rc geninfo_all_blocks=1 00:05:29.072 --rc geninfo_unexecuted_blocks=1 00:05:29.072 00:05:29.072 ' 00:05:29.072 05:43:36 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:29.072 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.072 --rc genhtml_branch_coverage=1 00:05:29.072 --rc genhtml_function_coverage=1 00:05:29.072 --rc genhtml_legend=1 00:05:29.072 --rc geninfo_all_blocks=1 00:05:29.072 --rc geninfo_unexecuted_blocks=1 00:05:29.072 00:05:29.072 ' 00:05:29.072 05:43:36 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:29.072 05:43:36 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:29.072 05:43:36 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58844 00:05:29.072 05:43:36 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58844 00:05:29.072 05:43:36 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58844 ']' 00:05:29.072 05:43:36 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.072 05:43:36 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.072 05:43:36 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:29.072 05:43:36 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.072 05:43:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.072 [2024-12-12 05:43:36.572357] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:05:29.072 [2024-12-12 05:43:36.572485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58844 ] 00:05:29.341 [2024-12-12 05:43:36.747691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.341 [2024-12-12 05:43:36.853506] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.290 05:43:37 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.290 05:43:37 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:30.290 05:43:37 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:30.550 05:43:37 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58844 00:05:30.550 05:43:37 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58844 ']' 00:05:30.550 05:43:37 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58844 00:05:30.550 05:43:37 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:30.550 05:43:37 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.550 05:43:37 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58844 00:05:30.550 05:43:37 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.550 killing process with pid 58844 00:05:30.550 05:43:37 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.550 05:43:37 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58844' 00:05:30.550 05:43:37 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 58844 00:05:30.550 05:43:37 alias_rpc -- common/autotest_common.sh@978 -- # wait 58844 00:05:33.088 ************************************ 00:05:33.088 END TEST alias_rpc 00:05:33.088 ************************************ 00:05:33.088 00:05:33.088 real 0m3.975s 00:05:33.088 user 0m3.956s 00:05:33.088 sys 0m0.562s 00:05:33.088 05:43:40 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.088 05:43:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.088 05:43:40 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:33.088 05:43:40 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:33.088 05:43:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.088 05:43:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.088 05:43:40 -- common/autotest_common.sh@10 -- # set +x 00:05:33.088 ************************************ 00:05:33.088 START TEST spdkcli_tcp 00:05:33.088 ************************************ 00:05:33.088 05:43:40 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:33.088 * Looking for test storage... 
00:05:33.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:33.088 05:43:40 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:33.088 05:43:40 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:33.088 05:43:40 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:33.088 05:43:40 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:33.088 05:43:40 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.088 05:43:40 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.088 05:43:40 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.088 05:43:40 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.088 05:43:40 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.088 05:43:40 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.088 05:43:40 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.088 05:43:40 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.088 05:43:40 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.088 05:43:40 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.088 05:43:40 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.088 05:43:40 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:33.088 05:43:40 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:33.088 05:43:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.088 05:43:40 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.089 05:43:40 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:33.089 05:43:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:33.089 05:43:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.089 05:43:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:33.089 05:43:40 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.089 05:43:40 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:33.089 05:43:40 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:33.089 05:43:40 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.089 05:43:40 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:33.089 05:43:40 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.089 05:43:40 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.089 05:43:40 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.089 05:43:40 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:33.089 05:43:40 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.089 05:43:40 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:33.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.089 --rc genhtml_branch_coverage=1 00:05:33.089 --rc genhtml_function_coverage=1 00:05:33.089 --rc genhtml_legend=1 00:05:33.089 --rc geninfo_all_blocks=1 00:05:33.089 --rc geninfo_unexecuted_blocks=1 00:05:33.089 00:05:33.089 ' 00:05:33.089 05:43:40 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:33.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.089 --rc genhtml_branch_coverage=1 00:05:33.089 --rc genhtml_function_coverage=1 00:05:33.089 --rc genhtml_legend=1 00:05:33.089 --rc geninfo_all_blocks=1 00:05:33.089 --rc geninfo_unexecuted_blocks=1 00:05:33.089 00:05:33.089 ' 00:05:33.089 05:43:40 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:33.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.089 --rc genhtml_branch_coverage=1 00:05:33.089 --rc genhtml_function_coverage=1 00:05:33.089 --rc genhtml_legend=1 00:05:33.089 --rc geninfo_all_blocks=1 00:05:33.089 --rc geninfo_unexecuted_blocks=1 00:05:33.089 00:05:33.089 ' 00:05:33.089 05:43:40 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:33.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.089 --rc genhtml_branch_coverage=1 00:05:33.089 --rc genhtml_function_coverage=1 00:05:33.089 --rc genhtml_legend=1 00:05:33.089 --rc geninfo_all_blocks=1 00:05:33.089 --rc geninfo_unexecuted_blocks=1 00:05:33.089 00:05:33.089 ' 00:05:33.089 05:43:40 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:33.089 05:43:40 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:33.089 05:43:40 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:33.089 05:43:40 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:33.089 05:43:40 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:33.089 05:43:40 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:33.089 05:43:40 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:33.089 05:43:40 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:33.089 05:43:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.089 05:43:40 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58946 00:05:33.089 05:43:40 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:33.089 05:43:40 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58946 00:05:33.089 05:43:40 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 58946 ']' 00:05:33.089 05:43:40 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.089 05:43:40 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.089 05:43:40 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.089 05:43:40 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.089 05:43:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:33.349 [2024-12-12 05:43:40.623384] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:05:33.349 [2024-12-12 05:43:40.623567] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58946 ] 00:05:33.349 [2024-12-12 05:43:40.799470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.608 [2024-12-12 05:43:40.911358] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.608 [2024-12-12 05:43:40.911396] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.548 05:43:41 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.548 05:43:41 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:34.548 05:43:41 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:34.548 05:43:41 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58968 00:05:34.548 05:43:41 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:34.548 [ 00:05:34.548 "bdev_malloc_delete", 
00:05:34.548 "bdev_malloc_create", 00:05:34.548 "bdev_null_resize", 00:05:34.548 "bdev_null_delete", 00:05:34.548 "bdev_null_create", 00:05:34.548 "bdev_nvme_cuse_unregister", 00:05:34.548 "bdev_nvme_cuse_register", 00:05:34.548 "bdev_opal_new_user", 00:05:34.548 "bdev_opal_set_lock_state", 00:05:34.548 "bdev_opal_delete", 00:05:34.548 "bdev_opal_get_info", 00:05:34.548 "bdev_opal_create", 00:05:34.548 "bdev_nvme_opal_revert", 00:05:34.548 "bdev_nvme_opal_init", 00:05:34.548 "bdev_nvme_send_cmd", 00:05:34.548 "bdev_nvme_set_keys", 00:05:34.548 "bdev_nvme_get_path_iostat", 00:05:34.548 "bdev_nvme_get_mdns_discovery_info", 00:05:34.548 "bdev_nvme_stop_mdns_discovery", 00:05:34.548 "bdev_nvme_start_mdns_discovery", 00:05:34.548 "bdev_nvme_set_multipath_policy", 00:05:34.548 "bdev_nvme_set_preferred_path", 00:05:34.548 "bdev_nvme_get_io_paths", 00:05:34.548 "bdev_nvme_remove_error_injection", 00:05:34.548 "bdev_nvme_add_error_injection", 00:05:34.548 "bdev_nvme_get_discovery_info", 00:05:34.548 "bdev_nvme_stop_discovery", 00:05:34.548 "bdev_nvme_start_discovery", 00:05:34.548 "bdev_nvme_get_controller_health_info", 00:05:34.548 "bdev_nvme_disable_controller", 00:05:34.548 "bdev_nvme_enable_controller", 00:05:34.548 "bdev_nvme_reset_controller", 00:05:34.548 "bdev_nvme_get_transport_statistics", 00:05:34.548 "bdev_nvme_apply_firmware", 00:05:34.548 "bdev_nvme_detach_controller", 00:05:34.548 "bdev_nvme_get_controllers", 00:05:34.548 "bdev_nvme_attach_controller", 00:05:34.548 "bdev_nvme_set_hotplug", 00:05:34.548 "bdev_nvme_set_options", 00:05:34.548 "bdev_passthru_delete", 00:05:34.548 "bdev_passthru_create", 00:05:34.548 "bdev_lvol_set_parent_bdev", 00:05:34.548 "bdev_lvol_set_parent", 00:05:34.548 "bdev_lvol_check_shallow_copy", 00:05:34.548 "bdev_lvol_start_shallow_copy", 00:05:34.548 "bdev_lvol_grow_lvstore", 00:05:34.548 "bdev_lvol_get_lvols", 00:05:34.548 "bdev_lvol_get_lvstores", 00:05:34.548 "bdev_lvol_delete", 00:05:34.548 "bdev_lvol_set_read_only", 
00:05:34.548 "bdev_lvol_resize", 00:05:34.548 "bdev_lvol_decouple_parent", 00:05:34.548 "bdev_lvol_inflate", 00:05:34.548 "bdev_lvol_rename", 00:05:34.548 "bdev_lvol_clone_bdev", 00:05:34.548 "bdev_lvol_clone", 00:05:34.548 "bdev_lvol_snapshot", 00:05:34.548 "bdev_lvol_create", 00:05:34.548 "bdev_lvol_delete_lvstore", 00:05:34.548 "bdev_lvol_rename_lvstore", 00:05:34.548 "bdev_lvol_create_lvstore", 00:05:34.548 "bdev_raid_set_options", 00:05:34.548 "bdev_raid_remove_base_bdev", 00:05:34.548 "bdev_raid_add_base_bdev", 00:05:34.548 "bdev_raid_delete", 00:05:34.548 "bdev_raid_create", 00:05:34.548 "bdev_raid_get_bdevs", 00:05:34.548 "bdev_error_inject_error", 00:05:34.548 "bdev_error_delete", 00:05:34.548 "bdev_error_create", 00:05:34.548 "bdev_split_delete", 00:05:34.548 "bdev_split_create", 00:05:34.548 "bdev_delay_delete", 00:05:34.548 "bdev_delay_create", 00:05:34.548 "bdev_delay_update_latency", 00:05:34.548 "bdev_zone_block_delete", 00:05:34.548 "bdev_zone_block_create", 00:05:34.548 "blobfs_create", 00:05:34.548 "blobfs_detect", 00:05:34.548 "blobfs_set_cache_size", 00:05:34.548 "bdev_aio_delete", 00:05:34.548 "bdev_aio_rescan", 00:05:34.548 "bdev_aio_create", 00:05:34.548 "bdev_ftl_set_property", 00:05:34.548 "bdev_ftl_get_properties", 00:05:34.549 "bdev_ftl_get_stats", 00:05:34.549 "bdev_ftl_unmap", 00:05:34.549 "bdev_ftl_unload", 00:05:34.549 "bdev_ftl_delete", 00:05:34.549 "bdev_ftl_load", 00:05:34.549 "bdev_ftl_create", 00:05:34.549 "bdev_virtio_attach_controller", 00:05:34.549 "bdev_virtio_scsi_get_devices", 00:05:34.549 "bdev_virtio_detach_controller", 00:05:34.549 "bdev_virtio_blk_set_hotplug", 00:05:34.549 "bdev_iscsi_delete", 00:05:34.549 "bdev_iscsi_create", 00:05:34.549 "bdev_iscsi_set_options", 00:05:34.549 "accel_error_inject_error", 00:05:34.549 "ioat_scan_accel_module", 00:05:34.549 "dsa_scan_accel_module", 00:05:34.549 "iaa_scan_accel_module", 00:05:34.549 "keyring_file_remove_key", 00:05:34.549 "keyring_file_add_key", 00:05:34.549 
"keyring_linux_set_options", 00:05:34.549 "fsdev_aio_delete", 00:05:34.549 "fsdev_aio_create", 00:05:34.549 "iscsi_get_histogram", 00:05:34.549 "iscsi_enable_histogram", 00:05:34.549 "iscsi_set_options", 00:05:34.549 "iscsi_get_auth_groups", 00:05:34.549 "iscsi_auth_group_remove_secret", 00:05:34.549 "iscsi_auth_group_add_secret", 00:05:34.549 "iscsi_delete_auth_group", 00:05:34.549 "iscsi_create_auth_group", 00:05:34.549 "iscsi_set_discovery_auth", 00:05:34.549 "iscsi_get_options", 00:05:34.549 "iscsi_target_node_request_logout", 00:05:34.549 "iscsi_target_node_set_redirect", 00:05:34.549 "iscsi_target_node_set_auth", 00:05:34.549 "iscsi_target_node_add_lun", 00:05:34.549 "iscsi_get_stats", 00:05:34.549 "iscsi_get_connections", 00:05:34.549 "iscsi_portal_group_set_auth", 00:05:34.549 "iscsi_start_portal_group", 00:05:34.549 "iscsi_delete_portal_group", 00:05:34.549 "iscsi_create_portal_group", 00:05:34.549 "iscsi_get_portal_groups", 00:05:34.549 "iscsi_delete_target_node", 00:05:34.549 "iscsi_target_node_remove_pg_ig_maps", 00:05:34.549 "iscsi_target_node_add_pg_ig_maps", 00:05:34.549 "iscsi_create_target_node", 00:05:34.549 "iscsi_get_target_nodes", 00:05:34.549 "iscsi_delete_initiator_group", 00:05:34.549 "iscsi_initiator_group_remove_initiators", 00:05:34.549 "iscsi_initiator_group_add_initiators", 00:05:34.549 "iscsi_create_initiator_group", 00:05:34.549 "iscsi_get_initiator_groups", 00:05:34.549 "nvmf_set_crdt", 00:05:34.549 "nvmf_set_config", 00:05:34.549 "nvmf_set_max_subsystems", 00:05:34.549 "nvmf_stop_mdns_prr", 00:05:34.549 "nvmf_publish_mdns_prr", 00:05:34.549 "nvmf_subsystem_get_listeners", 00:05:34.549 "nvmf_subsystem_get_qpairs", 00:05:34.549 "nvmf_subsystem_get_controllers", 00:05:34.549 "nvmf_get_stats", 00:05:34.549 "nvmf_get_transports", 00:05:34.549 "nvmf_create_transport", 00:05:34.549 "nvmf_get_targets", 00:05:34.549 "nvmf_delete_target", 00:05:34.549 "nvmf_create_target", 00:05:34.549 "nvmf_subsystem_allow_any_host", 00:05:34.549 
"nvmf_subsystem_set_keys", 00:05:34.549 "nvmf_subsystem_remove_host", 00:05:34.549 "nvmf_subsystem_add_host", 00:05:34.549 "nvmf_ns_remove_host", 00:05:34.549 "nvmf_ns_add_host", 00:05:34.549 "nvmf_subsystem_remove_ns", 00:05:34.549 "nvmf_subsystem_set_ns_ana_group", 00:05:34.549 "nvmf_subsystem_add_ns", 00:05:34.549 "nvmf_subsystem_listener_set_ana_state", 00:05:34.549 "nvmf_discovery_get_referrals", 00:05:34.549 "nvmf_discovery_remove_referral", 00:05:34.549 "nvmf_discovery_add_referral", 00:05:34.549 "nvmf_subsystem_remove_listener", 00:05:34.549 "nvmf_subsystem_add_listener", 00:05:34.549 "nvmf_delete_subsystem", 00:05:34.549 "nvmf_create_subsystem", 00:05:34.549 "nvmf_get_subsystems", 00:05:34.549 "env_dpdk_get_mem_stats", 00:05:34.549 "nbd_get_disks", 00:05:34.549 "nbd_stop_disk", 00:05:34.549 "nbd_start_disk", 00:05:34.549 "ublk_recover_disk", 00:05:34.549 "ublk_get_disks", 00:05:34.549 "ublk_stop_disk", 00:05:34.549 "ublk_start_disk", 00:05:34.549 "ublk_destroy_target", 00:05:34.549 "ublk_create_target", 00:05:34.549 "virtio_blk_create_transport", 00:05:34.549 "virtio_blk_get_transports", 00:05:34.549 "vhost_controller_set_coalescing", 00:05:34.549 "vhost_get_controllers", 00:05:34.549 "vhost_delete_controller", 00:05:34.549 "vhost_create_blk_controller", 00:05:34.549 "vhost_scsi_controller_remove_target", 00:05:34.549 "vhost_scsi_controller_add_target", 00:05:34.549 "vhost_start_scsi_controller", 00:05:34.549 "vhost_create_scsi_controller", 00:05:34.549 "thread_set_cpumask", 00:05:34.549 "scheduler_set_options", 00:05:34.549 "framework_get_governor", 00:05:34.549 "framework_get_scheduler", 00:05:34.549 "framework_set_scheduler", 00:05:34.549 "framework_get_reactors", 00:05:34.549 "thread_get_io_channels", 00:05:34.549 "thread_get_pollers", 00:05:34.549 "thread_get_stats", 00:05:34.549 "framework_monitor_context_switch", 00:05:34.549 "spdk_kill_instance", 00:05:34.549 "log_enable_timestamps", 00:05:34.549 "log_get_flags", 00:05:34.549 "log_clear_flag", 
00:05:34.549 "log_set_flag", 00:05:34.549 "log_get_level", 00:05:34.549 "log_set_level", 00:05:34.549 "log_get_print_level", 00:05:34.549 "log_set_print_level", 00:05:34.549 "framework_enable_cpumask_locks", 00:05:34.549 "framework_disable_cpumask_locks", 00:05:34.549 "framework_wait_init", 00:05:34.549 "framework_start_init", 00:05:34.549 "scsi_get_devices", 00:05:34.549 "bdev_get_histogram", 00:05:34.549 "bdev_enable_histogram", 00:05:34.549 "bdev_set_qos_limit", 00:05:34.549 "bdev_set_qd_sampling_period", 00:05:34.549 "bdev_get_bdevs", 00:05:34.549 "bdev_reset_iostat", 00:05:34.549 "bdev_get_iostat", 00:05:34.549 "bdev_examine", 00:05:34.549 "bdev_wait_for_examine", 00:05:34.549 "bdev_set_options", 00:05:34.549 "accel_get_stats", 00:05:34.549 "accel_set_options", 00:05:34.549 "accel_set_driver", 00:05:34.549 "accel_crypto_key_destroy", 00:05:34.549 "accel_crypto_keys_get", 00:05:34.549 "accel_crypto_key_create", 00:05:34.549 "accel_assign_opc", 00:05:34.549 "accel_get_module_info", 00:05:34.549 "accel_get_opc_assignments", 00:05:34.549 "vmd_rescan", 00:05:34.549 "vmd_remove_device", 00:05:34.549 "vmd_enable", 00:05:34.549 "sock_get_default_impl", 00:05:34.549 "sock_set_default_impl", 00:05:34.549 "sock_impl_set_options", 00:05:34.549 "sock_impl_get_options", 00:05:34.549 "iobuf_get_stats", 00:05:34.549 "iobuf_set_options", 00:05:34.549 "keyring_get_keys", 00:05:34.549 "framework_get_pci_devices", 00:05:34.549 "framework_get_config", 00:05:34.549 "framework_get_subsystems", 00:05:34.549 "fsdev_set_opts", 00:05:34.549 "fsdev_get_opts", 00:05:34.549 "trace_get_info", 00:05:34.549 "trace_get_tpoint_group_mask", 00:05:34.549 "trace_disable_tpoint_group", 00:05:34.549 "trace_enable_tpoint_group", 00:05:34.549 "trace_clear_tpoint_mask", 00:05:34.549 "trace_set_tpoint_mask", 00:05:34.549 "notify_get_notifications", 00:05:34.549 "notify_get_types", 00:05:34.549 "spdk_get_version", 00:05:34.549 "rpc_get_methods" 00:05:34.549 ] 00:05:34.549 05:43:41 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:34.549 05:43:41 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:34.549 05:43:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.549 05:43:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:34.549 05:43:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58946 00:05:34.549 05:43:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58946 ']' 00:05:34.549 05:43:41 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58946 00:05:34.549 05:43:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:34.549 05:43:41 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:34.549 05:43:41 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58946 00:05:34.549 killing process with pid 58946 00:05:34.549 05:43:42 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:34.549 05:43:42 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:34.549 05:43:42 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58946' 00:05:34.549 05:43:42 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58946 00:05:34.549 05:43:42 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58946 00:05:37.086 00:05:37.086 real 0m4.051s 00:05:37.086 user 0m7.189s 00:05:37.086 sys 0m0.614s 00:05:37.086 05:43:44 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.086 ************************************ 00:05:37.086 END TEST spdkcli_tcp 00:05:37.086 ************************************ 00:05:37.086 05:43:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.086 05:43:44 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:37.086 05:43:44 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.086 05:43:44 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.086 05:43:44 -- common/autotest_common.sh@10 -- # set +x 00:05:37.086 ************************************ 00:05:37.086 START TEST dpdk_mem_utility 00:05:37.086 ************************************ 00:05:37.086 05:43:44 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:37.086 * Looking for test storage... 00:05:37.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:37.086 05:43:44 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:37.086 05:43:44 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:37.086 05:43:44 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:37.086 05:43:44 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:37.086 05:43:44 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.086 05:43:44 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.086 05:43:44 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.086 05:43:44 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.086 05:43:44 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.086 05:43:44 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.086 05:43:44 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.086 05:43:44 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.086 05:43:44 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.086 05:43:44 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.086 05:43:44 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.086 05:43:44 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:37.086 05:43:44 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:37.086 
05:43:44 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.346 05:43:44 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:37.346 05:43:44 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:37.346 05:43:44 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:37.346 05:43:44 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.346 05:43:44 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:37.346 05:43:44 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.346 05:43:44 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:37.346 05:43:44 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:37.346 05:43:44 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.346 05:43:44 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:37.346 05:43:44 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.346 05:43:44 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.346 05:43:44 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.346 05:43:44 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:37.346 05:43:44 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.346 05:43:44 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:37.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.346 --rc genhtml_branch_coverage=1 00:05:37.346 --rc genhtml_function_coverage=1 00:05:37.346 --rc genhtml_legend=1 00:05:37.346 --rc geninfo_all_blocks=1 00:05:37.346 --rc geninfo_unexecuted_blocks=1 00:05:37.346 00:05:37.346 ' 00:05:37.346 05:43:44 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:37.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.346 --rc 
genhtml_branch_coverage=1 00:05:37.346 --rc genhtml_function_coverage=1 00:05:37.346 --rc genhtml_legend=1 00:05:37.346 --rc geninfo_all_blocks=1 00:05:37.346 --rc geninfo_unexecuted_blocks=1 00:05:37.346 00:05:37.346 ' 00:05:37.346 05:43:44 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:37.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.346 --rc genhtml_branch_coverage=1 00:05:37.346 --rc genhtml_function_coverage=1 00:05:37.346 --rc genhtml_legend=1 00:05:37.346 --rc geninfo_all_blocks=1 00:05:37.346 --rc geninfo_unexecuted_blocks=1 00:05:37.346 00:05:37.346 ' 00:05:37.346 05:43:44 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:37.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.346 --rc genhtml_branch_coverage=1 00:05:37.346 --rc genhtml_function_coverage=1 00:05:37.346 --rc genhtml_legend=1 00:05:37.346 --rc geninfo_all_blocks=1 00:05:37.346 --rc geninfo_unexecuted_blocks=1 00:05:37.346 00:05:37.346 ' 00:05:37.346 05:43:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:37.346 05:43:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59069 00:05:37.346 05:43:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:37.346 05:43:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59069 00:05:37.346 05:43:44 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59069 ']' 00:05:37.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:37.346 05:43:44 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.346 05:43:44 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.346 05:43:44 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.346 05:43:44 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.346 05:43:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:37.346 [2024-12-12 05:43:44.724117] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:05:37.346 [2024-12-12 05:43:44.724237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59069 ] 00:05:37.606 [2024-12-12 05:43:44.896196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.606 [2024-12-12 05:43:45.002713] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.549 05:43:45 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.549 05:43:45 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:38.549 05:43:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:38.549 05:43:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:38.549 05:43:45 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:38.549 05:43:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:38.549 { 00:05:38.549 "filename": "/tmp/spdk_mem_dump.txt" 00:05:38.549 } 00:05:38.549 05:43:45 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:38.549 
05:43:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:38.549 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:38.549 1 heaps totaling size 824.000000 MiB 00:05:38.549 size: 824.000000 MiB heap id: 0 00:05:38.549 end heaps---------- 00:05:38.549 9 mempools totaling size 603.782043 MiB 00:05:38.549 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:38.549 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:38.549 size: 100.555481 MiB name: bdev_io_59069 00:05:38.549 size: 50.003479 MiB name: msgpool_59069 00:05:38.549 size: 36.509338 MiB name: fsdev_io_59069 00:05:38.549 size: 21.763794 MiB name: PDU_Pool 00:05:38.549 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:38.549 size: 4.133484 MiB name: evtpool_59069 00:05:38.549 size: 0.026123 MiB name: Session_Pool 00:05:38.549 end mempools------- 00:05:38.549 6 memzones totaling size 4.142822 MiB 00:05:38.549 size: 1.000366 MiB name: RG_ring_0_59069 00:05:38.549 size: 1.000366 MiB name: RG_ring_1_59069 00:05:38.549 size: 1.000366 MiB name: RG_ring_4_59069 00:05:38.549 size: 1.000366 MiB name: RG_ring_5_59069 00:05:38.549 size: 0.125366 MiB name: RG_ring_2_59069 00:05:38.549 size: 0.015991 MiB name: RG_ring_3_59069 00:05:38.549 end memzones------- 00:05:38.549 05:43:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:38.549 heap id: 0 total size: 824.000000 MiB number of busy elements: 321 number of free elements: 18 00:05:38.549 list of free elements. 
size: 16.779907 MiB 00:05:38.549 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:38.549 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:38.549 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:38.549 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:38.549 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:38.549 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:38.549 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:38.549 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:38.549 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:38.549 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:38.549 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:38.549 element at address: 0x20001b400000 with size: 0.561462 MiB 00:05:38.549 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:38.549 element at address: 0x200019600000 with size: 0.487976 MiB 00:05:38.549 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:38.549 element at address: 0x200012c00000 with size: 0.433228 MiB 00:05:38.549 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:38.549 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:38.549 list of standard malloc elements. 
size: 199.289185 MiB
00:05:38.549 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:05:38.549 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:05:38.549 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:05:38.549 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:05:38.549 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:05:38.549 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:05:38.549 element at address: 0x200019deff40 with size: 0.062683 MiB
00:05:38.549 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:05:38.549 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:05:38.549 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:05:38.549 element at address: 0x200012bff040 with size: 0.000305 MiB
00:05:38.549 [about 300 further elements, each with size: 0.000244 MiB, spanning 0x2000002d7b00 through 0x20002886fe80; the largest contiguous runs are 0x2000004fdf40-0x2000004ffdc0, 0x20000087e1c0-0x20000087f4c0, 0x200000c7d3c0-0x200000c7ebc0, 0x20000a5ff200-0x20000a5fff00, 0x200012bff180-0x200012bfff00, 0x200012c6ee80-0x200012c6f880, 0x20001967cec0-0x20001967d9c0, 0x20001b48fbc0-0x20001b4953c0 and 0x20002886af80-0x20002886fe80]
00:05:38.551 list of memzone associated elements.
size: 607.930908 MiB
00:05:38.551 element at address: 0x20001b4954c0 with size: 211.416809 MiB
00:05:38.551 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:38.551 element at address: 0x20002886ff80 with size: 157.562622 MiB
00:05:38.551 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:38.551 element at address: 0x200012df1e40 with size: 100.055115 MiB
00:05:38.551 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59069_0
00:05:38.551 element at address: 0x200000dff340 with size: 48.003113 MiB
00:05:38.551 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59069_0
00:05:38.551 element at address: 0x200003ffdb40 with size: 36.008972 MiB
00:05:38.551 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59069_0
00:05:38.551 element at address: 0x200019fbe900 with size: 20.255615 MiB
00:05:38.551 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:38.551 element at address: 0x2000327feb00 with size: 18.005127 MiB
00:05:38.551 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:38.551 element at address: 0x2000004ffec0 with size: 3.000305 MiB
00:05:38.551 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59069_0
00:05:38.551 element at address: 0x2000009ffdc0 with size: 2.000549 MiB
00:05:38.551 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59069
00:05:38.551 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:38.551 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59069
00:05:38.551 element at address: 0x2000196fde00 with size: 1.008179 MiB
00:05:38.551 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:38.551 element at address: 0x200019ebc780 with size: 1.008179 MiB
00:05:38.551 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:38.551 element at address: 0x2000192fde00 with size: 1.008179 MiB
00:05:38.551 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:38.551 element at address: 0x200012cefcc0 with size: 1.008179 MiB
00:05:38.551 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:38.551 element at address: 0x200000cff100 with size: 1.000549 MiB
00:05:38.551 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59069
00:05:38.551 element at address: 0x2000008ffb80 with size: 1.000549 MiB
00:05:38.551 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59069
00:05:38.551 element at address: 0x200019affd40 with size: 1.000549 MiB
00:05:38.551 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59069
00:05:38.551 element at address: 0x2000326fe8c0 with size: 1.000549 MiB
00:05:38.551 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59069
00:05:38.551 element at address: 0x20000087f5c0 with size: 0.500549 MiB
00:05:38.551 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59069
00:05:38.551 element at address: 0x200000c7ecc0 with size: 0.500549 MiB
00:05:38.551 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59069
00:05:38.551 element at address: 0x20001967dac0 with size: 0.500549 MiB
00:05:38.551 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:38.551 element at address: 0x200012c6f980 with size: 0.500549 MiB
00:05:38.551 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:38.551 element at address: 0x200019e7c440 with size: 0.250549 MiB
00:05:38.551 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:38.551 element at address: 0x2000002b78c0 with size: 0.125549 MiB
00:05:38.551 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59069
00:05:38.551 element at address: 0x20000085df80 with size: 0.125549 MiB
00:05:38.551 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59069
00:05:38.551 element at address: 0x2000192f5ac0 with size: 0.031799 MiB
00:05:38.551 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:38.551 element at address: 0x200028864140 with size: 0.023804 MiB
00:05:38.551 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:38.551 element at address: 0x200000859d40 with size: 0.016174 MiB
00:05:38.551 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59069
00:05:38.551 element at address: 0x20002886a2c0 with size: 0.002502 MiB
00:05:38.551 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:38.551 element at address: 0x2000004ffa40 with size: 0.000366 MiB
00:05:38.551 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59069
00:05:38.551 element at address: 0x2000008ff900 with size: 0.000366 MiB
00:05:38.551 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59069
00:05:38.551 element at address: 0x200012bffd80 with size: 0.000366 MiB
00:05:38.551 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59069
00:05:38.551 element at address: 0x20002886ae00 with size: 0.000366 MiB
00:05:38.551 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:38.551 05:43:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:38.551 05:43:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59069
00:05:38.551 05:43:45 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59069 ']'
00:05:38.551 05:43:45 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59069
00:05:38.551 05:43:45 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:38.551 05:43:45 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:38.551 05:43:45 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59069
00:05:38.551 05:43:45 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:38.551 05:43:45 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:38.551 05:43:45 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59069'
killing process with pid 59069
00:05:38.551 05:43:45 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59069
00:05:38.551 05:43:45 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59069
00:05:41.092
00:05:41.092 real 0m3.827s
00:05:41.092 user 0m3.719s
00:05:41.092 sys 0m0.564s
00:05:41.092 ************************************
00:05:41.092 END TEST dpdk_mem_utility
00:05:41.092 ************************************
00:05:41.092 05:43:48 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:41.092 05:43:48 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:41.092 05:43:48 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:41.092 05:43:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:41.092 05:43:48 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:41.092 05:43:48 -- common/autotest_common.sh@10 -- # set +x
00:05:41.092 ************************************
00:05:41.092 START TEST event
00:05:41.092 ************************************
00:05:41.092 05:43:48 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:41.092 * Looking for test storage...
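The xtrace above records the framework's `killprocess` helper tearing down reactor pid 59069: check the pid is non-empty, probe it with `kill -0`, look up its command name with `ps` on Linux, refuse to kill a `sudo` wrapper, then `kill` and `wait`. A minimal sketch of that flow, reconstructed from the traced commands (the function body below is an assumption, not the exact `common/autotest_common.sh` source):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess flow traced above (a reconstruction from the
# xtrace, not the framework's exact source).
killprocess() {
    local pid=$1 process_name=
    [ -z "$pid" ] && return 1                     # '[' -z 59069 ']' guard
    kill -0 "$pid" 2>/dev/null || return 0        # kill -0: probe only, no signal sent
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
    fi
    [ "$process_name" = sudo ] && return 1        # never signal a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"                                   # default SIGTERM
    wait "$pid" 2>/dev/null || true               # reap it when it is our child
}
```

`kill -0` delivers no signal at all; it only tests that the pid exists and is signalable, which is why the trace runs it before the real `kill`.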
00:05:41.092 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:41.092 05:43:48 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:41.092 05:43:48 event -- common/autotest_common.sh@1711 -- # lcov --version
00:05:41.092 05:43:48 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:41.092 05:43:48 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:41.092 05:43:48 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:41.092 05:43:48 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:41.092 05:43:48 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:41.092 05:43:48 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:41.092 05:43:48 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:41.092 05:43:48 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:41.092 05:43:48 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:41.092 05:43:48 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:41.092 05:43:48 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:41.092 05:43:48 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:41.092 05:43:48 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:41.092 05:43:48 event -- scripts/common.sh@344 -- # case "$op" in
00:05:41.092 05:43:48 event -- scripts/common.sh@345 -- # : 1
00:05:41.092 05:43:48 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:41.092 05:43:48 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:41.092 05:43:48 event -- scripts/common.sh@365 -- # decimal 1
00:05:41.092 05:43:48 event -- scripts/common.sh@353 -- # local d=1
00:05:41.092 05:43:48 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:41.092 05:43:48 event -- scripts/common.sh@355 -- # echo 1
00:05:41.092 05:43:48 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:41.092 05:43:48 event -- scripts/common.sh@366 -- # decimal 2
00:05:41.092 05:43:48 event -- scripts/common.sh@353 -- # local d=2
00:05:41.092 05:43:48 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:41.092 05:43:48 event -- scripts/common.sh@355 -- # echo 2
00:05:41.092 05:43:48 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:41.092 05:43:48 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:41.092 05:43:48 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:41.092 05:43:48 event -- scripts/common.sh@368 -- # return 0
00:05:41.092 05:43:48 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:41.092 05:43:48 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:41.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:41.092 --rc genhtml_branch_coverage=1
00:05:41.092 --rc genhtml_function_coverage=1
00:05:41.092 --rc genhtml_legend=1
00:05:41.092 --rc geninfo_all_blocks=1
00:05:41.092 --rc geninfo_unexecuted_blocks=1
00:05:41.092
00:05:41.092 '
00:05:41.092 05:43:48 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:41.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:41.092 --rc genhtml_branch_coverage=1
00:05:41.092 --rc genhtml_function_coverage=1
00:05:41.092 --rc genhtml_legend=1
00:05:41.092 --rc geninfo_all_blocks=1
00:05:41.092 --rc geninfo_unexecuted_blocks=1
00:05:41.092
00:05:41.092 '
00:05:41.092 05:43:48 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:41.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:41.092 --rc genhtml_branch_coverage=1
00:05:41.092 --rc genhtml_function_coverage=1
00:05:41.092 --rc genhtml_legend=1
00:05:41.092 --rc geninfo_all_blocks=1
00:05:41.092 --rc geninfo_unexecuted_blocks=1
00:05:41.092
00:05:41.092 '
00:05:41.092 05:43:48 event -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:41.092 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:41.092 --rc genhtml_branch_coverage=1
00:05:41.092 --rc genhtml_function_coverage=1
00:05:41.092 --rc genhtml_legend=1
00:05:41.092 --rc geninfo_all_blocks=1
00:05:41.092 --rc geninfo_unexecuted_blocks=1
00:05:41.092
00:05:41.092 '
00:05:41.092 05:43:48 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:41.092 05:43:48 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:41.092 05:43:48 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:41.092 05:43:48 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:05:41.092 05:43:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:41.092 05:43:48 event -- common/autotest_common.sh@10 -- # set +x
00:05:41.092 ************************************
00:05:41.092 START TEST event_perf
00:05:41.092 ************************************
00:05:41.092 05:43:48 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:41.092 Running I/O for 1 seconds...[2024-12-12 05:43:48.582100] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization...
00:05:41.092 [2024-12-12 05:43:48.582243] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59177 ]
00:05:41.352 [2024-12-12 05:43:48.756630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:41.352 [2024-12-12 05:43:48.869105] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:41.352 [2024-12-12 05:43:48.869279] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:05:41.352 [2024-12-12 05:43:48.869359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:41.352 [2024-12-12 05:43:48.869395] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:05:42.733 Running I/O for 1 seconds...
00:05:42.733 lcore 0: 207138
00:05:42.733 lcore 1: 207138
00:05:42.733 lcore 2: 207138
00:05:42.733 lcore 3: 207137
00:05:42.733 done.
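The per-lcore figures printed by event_perf above are events processed on each core during the one-second run; summing them (values copied verbatim from this run) gives the whole-app rate:

```shell
#!/usr/bin/env bash
# Sum the per-lcore event counts reported by event_perf above.
# The four counter values are copied from this run's output.
total=0
for count in 207138 207138 207138 207137; do
    total=$((total + count))
done
echo "total events in 1 second across 4 lcores: $total"   # 828551
```

The near-identical per-core counts are the point of the test: with `-m 0xF` all four reactors poll independently, so a large skew between lcores would indicate a scheduling problem.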
00:05:42.733 00:05:42.733 real 0m1.569s 00:05:42.733 user 0m4.337s 00:05:42.733 sys 0m0.114s 00:05:42.733 05:43:50 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.733 05:43:50 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.733 ************************************ 00:05:42.733 END TEST event_perf 00:05:42.733 ************************************ 00:05:42.733 05:43:50 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:42.733 05:43:50 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:42.733 05:43:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.733 05:43:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.733 ************************************ 00:05:42.733 START TEST event_reactor 00:05:42.733 ************************************ 00:05:42.733 05:43:50 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:42.733 [2024-12-12 05:43:50.210692] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:05:42.733 [2024-12-12 05:43:50.210838] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59216 ] 00:05:42.993 [2024-12-12 05:43:50.369171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.993 [2024-12-12 05:43:50.482124] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.373 test_start 00:05:44.373 oneshot 00:05:44.373 tick 100 00:05:44.373 tick 100 00:05:44.373 tick 250 00:05:44.373 tick 100 00:05:44.373 tick 100 00:05:44.373 tick 100 00:05:44.373 tick 250 00:05:44.373 tick 500 00:05:44.373 tick 100 00:05:44.373 tick 100 00:05:44.373 tick 250 00:05:44.373 tick 100 00:05:44.373 tick 100 00:05:44.373 test_end 00:05:44.373 ************************************ 00:05:44.373 END TEST event_reactor 00:05:44.373 ************************************ 00:05:44.373 00:05:44.373 real 0m1.532s 00:05:44.373 user 0m1.334s 00:05:44.373 sys 0m0.089s 00:05:44.373 05:43:51 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.373 05:43:51 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:44.373 05:43:51 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:44.373 05:43:51 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:44.373 05:43:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.373 05:43:51 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.373 ************************************ 00:05:44.373 START TEST event_reactor_perf 00:05:44.373 ************************************ 00:05:44.373 05:43:51 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:44.373 [2024-12-12 
05:43:51.805731] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:05:44.373 [2024-12-12 05:43:51.805876] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59253 ] 00:05:44.632 [2024-12-12 05:43:51.979362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.632 [2024-12-12 05:43:52.090229] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.013 test_start 00:05:46.013 test_end 00:05:46.013 Performance: 405258 events per second 00:05:46.013 00:05:46.013 real 0m1.548s 00:05:46.013 user 0m1.340s 00:05:46.013 sys 0m0.099s 00:05:46.013 05:43:53 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.013 05:43:53 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.013 ************************************ 00:05:46.013 END TEST event_reactor_perf 00:05:46.013 ************************************ 00:05:46.013 05:43:53 event -- event/event.sh@49 -- # uname -s 00:05:46.013 05:43:53 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:46.013 05:43:53 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:46.013 05:43:53 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.013 05:43:53 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.013 05:43:53 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.013 ************************************ 00:05:46.013 START TEST event_scheduler 00:05:46.013 ************************************ 00:05:46.013 05:43:53 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:46.013 * Looking for test storage... 
00:05:46.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:46.013 05:43:53 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:46.013 05:43:53 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:46.013 05:43:53 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:46.273 05:43:53 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.273 05:43:53 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:46.273 05:43:53 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.273 05:43:53 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:46.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.273 --rc genhtml_branch_coverage=1 00:05:46.273 --rc genhtml_function_coverage=1 00:05:46.273 --rc genhtml_legend=1 00:05:46.273 --rc geninfo_all_blocks=1 00:05:46.273 --rc geninfo_unexecuted_blocks=1 00:05:46.273 00:05:46.273 ' 00:05:46.273 05:43:53 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:46.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.273 --rc genhtml_branch_coverage=1 00:05:46.273 --rc genhtml_function_coverage=1 00:05:46.273 --rc 
genhtml_legend=1 00:05:46.273 --rc geninfo_all_blocks=1 00:05:46.274 --rc geninfo_unexecuted_blocks=1 00:05:46.274 00:05:46.274 ' 00:05:46.274 05:43:53 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:46.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.274 --rc genhtml_branch_coverage=1 00:05:46.274 --rc genhtml_function_coverage=1 00:05:46.274 --rc genhtml_legend=1 00:05:46.274 --rc geninfo_all_blocks=1 00:05:46.274 --rc geninfo_unexecuted_blocks=1 00:05:46.274 00:05:46.274 ' 00:05:46.274 05:43:53 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:46.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.274 --rc genhtml_branch_coverage=1 00:05:46.274 --rc genhtml_function_coverage=1 00:05:46.274 --rc genhtml_legend=1 00:05:46.274 --rc geninfo_all_blocks=1 00:05:46.274 --rc geninfo_unexecuted_blocks=1 00:05:46.274 00:05:46.274 ' 00:05:46.274 05:43:53 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:46.274 05:43:53 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59329 00:05:46.274 05:43:53 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:46.274 05:43:53 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:46.274 05:43:53 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59329 00:05:46.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:46.274 05:43:53 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59329 ']' 00:05:46.274 05:43:53 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.274 05:43:53 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.274 05:43:53 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.274 05:43:53 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.274 05:43:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:46.274 [2024-12-12 05:43:53.691327] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:05:46.274 [2024-12-12 05:43:53.691445] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59329 ] 00:05:46.533 [2024-12-12 05:43:53.852121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:46.533 [2024-12-12 05:43:53.966807] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.533 [2024-12-12 05:43:53.966981] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:46.533 [2024-12-12 05:43:53.967112] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:46.533 [2024-12-12 05:43:53.967149] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:47.102 05:43:54 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.102 05:43:54 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:47.102 05:43:54 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:47.102 05:43:54 
event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.102 05:43:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.102 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:47.102 POWER: Cannot set governor of lcore 0 to userspace 00:05:47.102 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:47.102 POWER: Cannot set governor of lcore 0 to performance 00:05:47.102 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:47.102 POWER: Cannot set governor of lcore 0 to userspace 00:05:47.102 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:47.102 POWER: Cannot set governor of lcore 0 to userspace 00:05:47.102 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:47.102 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:47.102 POWER: Unable to set Power Management Environment for lcore 0 00:05:47.102 [2024-12-12 05:43:54.515723] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:47.102 [2024-12-12 05:43:54.515765] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:47.102 [2024-12-12 05:43:54.515797] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:47.102 [2024-12-12 05:43:54.515836] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:47.102 [2024-12-12 05:43:54.515865] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:47.102 [2024-12-12 05:43:54.515892] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:47.102 05:43:54 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.102 05:43:54 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd 
framework_start_init 00:05:47.102 05:43:54 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.102 05:43:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.362 [2024-12-12 05:43:54.829814] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:47.362 05:43:54 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.362 05:43:54 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:47.362 05:43:54 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.362 05:43:54 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.362 05:43:54 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:47.362 ************************************ 00:05:47.362 START TEST scheduler_create_thread 00:05:47.362 ************************************ 00:05:47.362 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:47.362 05:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:47.362 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.362 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.362 2 00:05:47.362 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.362 05:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:47.362 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.362 
05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.362 3 00:05:47.362 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.362 05:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:47.362 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.362 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.622 4 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.622 5 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.622 6 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 
-- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.622 7 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.622 8 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.622 9 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:47.622 10 
00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.622 05:43:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.022 05:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.022 05:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:49.022 05:43:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:49.022 05:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.022 05:43:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.592 05:43:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.592 05:43:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:49.592 05:43:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.592 05:43:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.531 05:43:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.531 05:43:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:50.531 05:43:57 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:50.531 05:43:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.531 05:43:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.470 ************************************ 00:05:51.470 END TEST scheduler_create_thread 00:05:51.470 ************************************ 00:05:51.470 05:43:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.470 00:05:51.470 real 0m3.883s 00:05:51.470 user 0m0.027s 00:05:51.470 sys 0m0.010s 00:05:51.470 05:43:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.470 05:43:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.471 05:43:58 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:51.471 05:43:58 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59329 00:05:51.471 05:43:58 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59329 ']' 00:05:51.471 05:43:58 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59329 00:05:51.471 05:43:58 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:51.471 05:43:58 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:51.471 05:43:58 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59329 00:05:51.471 killing process with pid 59329 00:05:51.471 05:43:58 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:51.471 05:43:58 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:51.471 05:43:58 event.event_scheduler -- common/autotest_common.sh@972 
-- # echo 'killing process with pid 59329' 00:05:51.471 05:43:58 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59329 00:05:51.471 05:43:58 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59329 00:05:51.730 [2024-12-12 05:43:59.106614] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:53.110 00:05:53.110 real 0m6.861s 00:05:53.110 user 0m14.220s 00:05:53.110 sys 0m0.498s 00:05:53.110 05:44:00 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.110 05:44:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.110 ************************************ 00:05:53.110 END TEST event_scheduler 00:05:53.110 ************************************ 00:05:53.110 05:44:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:53.110 05:44:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:53.110 05:44:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.110 05:44:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.110 05:44:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.110 ************************************ 00:05:53.110 START TEST app_repeat 00:05:53.111 ************************************ 00:05:53.111 05:44:00 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:53.111 05:44:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.111 05:44:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.111 05:44:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:53.111 05:44:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.111 05:44:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:53.111 05:44:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:53.111 05:44:00 event.app_repeat -- 
event/event.sh@17 -- # modprobe nbd 00:05:53.111 05:44:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59446 00:05:53.111 05:44:00 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:53.111 05:44:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.111 Process app_repeat pid: 59446 00:05:53.111 05:44:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59446' 00:05:53.111 05:44:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:53.111 spdk_app_start Round 0 00:05:53.111 05:44:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:53.111 05:44:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59446 /var/tmp/spdk-nbd.sock 00:05:53.111 05:44:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59446 ']' 00:05:53.111 05:44:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.111 05:44:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.111 05:44:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.111 05:44:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.111 05:44:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.111 [2024-12-12 05:44:00.379101] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:05:53.111 [2024-12-12 05:44:00.379221] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59446 ] 00:05:53.111 [2024-12-12 05:44:00.534320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.370 [2024-12-12 05:44:00.644826] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.370 [2024-12-12 05:44:00.644849] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.938 05:44:01 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.938 05:44:01 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:53.938 05:44:01 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.198 Malloc0 00:05:54.198 05:44:01 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.458 Malloc1 00:05:54.458 05:44:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.458 05:44:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.458 05:44:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.458 05:44:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.458 05:44:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.458 05:44:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.458 05:44:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.458 05:44:01 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.458 05:44:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.458 05:44:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.458 05:44:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.458 05:44:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.458 05:44:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.458 05:44:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.458 05:44:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.458 05:44:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.458 /dev/nbd0 00:05:54.720 05:44:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.720 05:44:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.720 05:44:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:54.720 05:44:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.720 05:44:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.720 05:44:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.720 05:44:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:54.720 05:44:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.720 05:44:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.720 05:44:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.720 05:44:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.720 1+0 records in 00:05:54.720 1+0 
records out 00:05:54.720 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034722 s, 11.8 MB/s 00:05:54.720 05:44:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.720 05:44:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.720 05:44:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.720 05:44:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.720 05:44:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.720 05:44:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.720 05:44:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.720 05:44:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.720 /dev/nbd1 00:05:54.720 05:44:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.720 05:44:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.720 05:44:02 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:54.720 05:44:02 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.720 05:44:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.720 05:44:02 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.720 05:44:02 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:54.981 05:44:02 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.981 05:44:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.981 05:44:02 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.981 05:44:02 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.981 1+0 records in 00:05:54.981 1+0 records out 00:05:54.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364492 s, 11.2 MB/s 00:05:54.981 05:44:02 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.981 05:44:02 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.981 05:44:02 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:54.981 05:44:02 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.981 05:44:02 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.982 { 00:05:54.982 "nbd_device": "/dev/nbd0", 00:05:54.982 "bdev_name": "Malloc0" 00:05:54.982 }, 00:05:54.982 { 00:05:54.982 "nbd_device": "/dev/nbd1", 00:05:54.982 "bdev_name": "Malloc1" 00:05:54.982 } 00:05:54.982 ]' 00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.982 { 00:05:54.982 "nbd_device": "/dev/nbd0", 00:05:54.982 "bdev_name": "Malloc0" 00:05:54.982 }, 00:05:54.982 { 00:05:54.982 "nbd_device": "/dev/nbd1", 00:05:54.982 "bdev_name": "Malloc1" 00:05:54.982 } 00:05:54.982 ]' 
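The `nbd_get_count` step traced above extracts device paths from the `nbd_get_disks` JSON with `jq -r '.[] | .nbd_device'` and then counts them with `grep -c /dev/nbd`. A minimal re-creation of just the counting stage — the two device names are hard-coded here as an assumption standing in for the jq output, so the sketch runs without jq or a live RPC socket:

```shell
# Names as jq -r '.[] | .nbd_device' emits them for the JSON in the trace
# (hard-coded here so the sketch needs neither jq nor spdk-nbd.sock).
nbd_disks_name='/dev/nbd0
/dev/nbd1'

# grep -c counts matching lines, as bdev/nbd_common.sh@65 does in the trace.
count=$(printf '%s\n' "$nbd_disks_name" | grep -c /dev/nbd)
echo "$count"   # 2
```

The subsequent `'[' 2 -ne 2 ']'` check in the trace then compares this count against the expected number of started disks.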
00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.982 /dev/nbd1' 00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.982 /dev/nbd1' 00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.982 05:44:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:55.242 256+0 records in 00:05:55.242 256+0 records out 00:05:55.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135777 s, 77.2 MB/s 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:55.242 256+0 records in 00:05:55.242 256+0 records out 00:05:55.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210549 s, 49.8 MB/s 00:05:55.242 05:44:02 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:55.242 256+0 records in 00:05:55.242 256+0 records out 00:05:55.242 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02338 s, 44.8 MB/s 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.242 05:44:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.502 05:44:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.502 05:44:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.502 05:44:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.502 05:44:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.502 05:44:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.502 05:44:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.502 05:44:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.502 05:44:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.502 05:44:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.502 05:44:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.502 05:44:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.502 05:44:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.502 05:44:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.502 05:44:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.502 05:44:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.502 05:44:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.502 05:44:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:05:55.502 05:44:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.502 05:44:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.502 05:44:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.502 05:44:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.761 05:44:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.761 05:44:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.761 05:44:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.761 05:44:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.761 05:44:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.761 05:44:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.761 05:44:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.761 05:44:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.761 05:44:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.761 05:44:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.761 05:44:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.761 05:44:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.761 05:44:03 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:56.331 05:44:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.271 [2024-12-12 05:44:04.751646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.530 [2024-12-12 05:44:04.852680] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.530 [2024-12-12 05:44:04.852682] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.530 
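Both `waitfornbd` and `waitfornbd_exit` in the trace above follow the same shape: poll `/proc/partitions` up to 20 times for the device name, then `break` out once it appears (or disappears). A condensed sketch of that polling pattern — the partition table is parameterized here instead of hard-coding `/proc/partitions`, purely so the sketch can run anywhere; the helper name `wait_for_name` is hypothetical:

```shell
# Poll a partitions-style table up to 20 times for a whole-word device
# name, mirroring the grep -q -w loop in autotest_common.sh's waitfornbd.
wait_for_name() {
    local name=$1 table=$2 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" "$table" && return 0
        sleep 0.1
    done
    return 1
}

table=$(mktemp)
echo "nbd0" > "$table"
wait_for_name nbd0 "$table" && echo present    # prints "present" immediately
wait_for_name nbd9 "$table" || echo absent     # prints "absent" after ~2s
rm -f "$table"
```

The real `waitfornbd` additionally proves the device answers I/O with a one-block `dd iflag=direct` read, which a plain file cannot demonstrate.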
[2024-12-12 05:44:05.039800] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.530 [2024-12-12 05:44:05.039905] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.438 05:44:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.438 05:44:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:59.438 spdk_app_start Round 1 00:05:59.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.438 05:44:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59446 /var/tmp/spdk-nbd.sock 00:05:59.438 05:44:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59446 ']' 00:05:59.438 05:44:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.438 05:44:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.438 05:44:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:59.438 05:44:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.438 05:44:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.438 05:44:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.438 05:44:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:59.438 05:44:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.697 Malloc0 00:05:59.697 05:44:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.957 Malloc1 00:05:59.957 05:44:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.957 05:44:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.957 05:44:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.957 05:44:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.957 05:44:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.957 05:44:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.957 05:44:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.957 05:44:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.957 05:44:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.957 05:44:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.957 05:44:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.957 05:44:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:59.957 05:44:07 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:59.957 05:44:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.957 05:44:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.957 05:44:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.217 /dev/nbd0 00:06:00.217 05:44:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.217 05:44:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.217 05:44:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:00.217 05:44:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.217 05:44:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.217 05:44:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.217 05:44:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:00.217 05:44:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.217 05:44:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.217 05:44:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.217 05:44:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.217 1+0 records in 00:06:00.217 1+0 records out 00:06:00.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024424 s, 16.8 MB/s 00:06:00.217 05:44:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.217 05:44:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.217 05:44:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.217 05:44:07 
event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.217 05:44:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.217 05:44:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.217 05:44:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.217 05:44:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:00.477 /dev/nbd1 00:06:00.477 05:44:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:00.477 05:44:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:00.477 05:44:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:00.477 05:44:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:00.477 05:44:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.477 05:44:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.477 05:44:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:00.477 05:44:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:00.477 05:44:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.477 05:44:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.477 05:44:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.477 1+0 records in 00:06:00.477 1+0 records out 00:06:00.477 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421143 s, 9.7 MB/s 00:06:00.477 05:44:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.477 05:44:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:00.477 05:44:07 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.477 05:44:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.477 05:44:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:00.477 05:44:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.477 05:44:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.477 05:44:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.477 05:44:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.477 05:44:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.737 { 00:06:00.737 "nbd_device": "/dev/nbd0", 00:06:00.737 "bdev_name": "Malloc0" 00:06:00.737 }, 00:06:00.737 { 00:06:00.737 "nbd_device": "/dev/nbd1", 00:06:00.737 "bdev_name": "Malloc1" 00:06:00.737 } 00:06:00.737 ]' 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.737 { 00:06:00.737 "nbd_device": "/dev/nbd0", 00:06:00.737 "bdev_name": "Malloc0" 00:06:00.737 }, 00:06:00.737 { 00:06:00.737 "nbd_device": "/dev/nbd1", 00:06:00.737 "bdev_name": "Malloc1" 00:06:00.737 } 00:06:00.737 ]' 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.737 /dev/nbd1' 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.737 /dev/nbd1' 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.737 
05:44:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.737 256+0 records in 00:06:00.737 256+0 records out 00:06:00.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143168 s, 73.2 MB/s 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.737 256+0 records in 00:06:00.737 256+0 records out 00:06:00.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175391 s, 59.8 MB/s 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.737 256+0 records in 00:06:00.737 256+0 records out 00:06:00.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0302042 s, 34.7 MB/s 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.737 05:44:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.997 05:44:08 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.997 05:44:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.997 05:44:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.997 05:44:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.998 05:44:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.998 05:44:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.998 05:44:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.998 05:44:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.998 05:44:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.998 05:44:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.257 05:44:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.257 05:44:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.257 05:44:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.257 05:44:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.257 05:44:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.257 05:44:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.257 05:44:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.257 05:44:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.257 05:44:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.257 05:44:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.257 05:44:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.517 05:44:08 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:01.517 05:44:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:01.517 05:44:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:01.517 05:44:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:01.517 05:44:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:01.517 05:44:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.517 05:44:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:01.517 05:44:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:01.517 05:44:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:01.517 05:44:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:01.517 05:44:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:01.517 05:44:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:01.517 05:44:08 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.782 05:44:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:03.167 [2024-12-12 05:44:10.370312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:03.167 [2024-12-12 05:44:10.470657] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.167 [2024-12-12 05:44:10.470683] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:03.167 [2024-12-12 05:44:10.653388] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:03.167 [2024-12-12 05:44:10.653484] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.076 spdk_app_start Round 2 00:06:05.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
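The `nbd_dd_data_verify` write/verify cycle repeated in each round above can be sketched as follows. Plain files stand in for `/dev/nbd0` and `/dev/nbd1` so the sketch runs without real block devices, which also means `oflag=direct` from the real test is dropped (direct I/O needs a block device); paths under `$tmp` are illustrative only:

```shell
# Fill a 1 MiB pattern file from /dev/urandom, dd it onto every "device",
# then cmp each one back against the pattern -- the write/verify flow of
# bdev/nbd_common.sh@76..83 in the trace.
tmp=$(mktemp -d)
dd if=/dev/urandom of="$tmp/pattern" bs=4096 count=256 2>/dev/null

for dev in "$tmp/nbd0" "$tmp/nbd1"; do
    dd if="$tmp/pattern" of="$dev" bs=4096 count=256 2>/dev/null
done

status=0
for dev in "$tmp/nbd0" "$tmp/nbd1"; do
    # Compare the first 1 MiB, like 'cmp -b -n 1M' in the trace.
    cmp -n 1048576 "$tmp/pattern" "$dev" || status=1
done
echo "verify status: $status"   # verify status: 0
rm -rf "$tmp"
```

After a successful verify the real test removes the pattern file and tears the nbd devices down via `nbd_stop_disk`, as the trace shows next.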
00:06:05.076 05:44:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:05.076 05:44:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:05.076 05:44:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59446 /var/tmp/spdk-nbd.sock 00:06:05.076 05:44:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59446 ']' 00:06:05.076 05:44:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.076 05:44:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.076 05:44:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.076 05:44:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.076 05:44:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.076 05:44:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.076 05:44:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:05.076 05:44:12 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.335 Malloc0 00:06:05.335 05:44:12 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.595 Malloc1 00:06:05.595 05:44:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.595 05:44:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.595 05:44:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.595 05:44:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.595 05:44:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.595 05:44:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.595 05:44:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.595 05:44:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.595 05:44:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.595 05:44:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.595 05:44:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.595 05:44:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.595 05:44:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.595 05:44:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.595 05:44:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.595 05:44:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.855 /dev/nbd0 00:06:05.855 05:44:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.855 05:44:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.855 05:44:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:05.855 05:44:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:05.855 05:44:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.855 05:44:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.855 05:44:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:05.855 05:44:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:05.855 05:44:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:06:05.855 05:44:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.855 05:44:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.855 1+0 records in 00:06:05.855 1+0 records out 00:06:05.855 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00016925 s, 24.2 MB/s 00:06:05.855 05:44:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.855 05:44:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:05.855 05:44:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:05.855 05:44:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.855 05:44:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:05.855 05:44:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.855 05:44:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.855 05:44:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.115 /dev/nbd1 00:06:06.115 05:44:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.115 05:44:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.115 05:44:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:06.115 05:44:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:06.115 05:44:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:06.115 05:44:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:06.115 05:44:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:06.115 05:44:13 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:06:06.115 05:44:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:06.115 05:44:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:06.115 05:44:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.115 1+0 records in 00:06:06.115 1+0 records out 00:06:06.115 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385413 s, 10.6 MB/s 00:06:06.115 05:44:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.115 05:44:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:06.115 05:44:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.115 05:44:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:06.115 05:44:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:06.115 05:44:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.115 05:44:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.115 05:44:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.115 05:44:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.115 05:44:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.375 { 00:06:06.375 "nbd_device": "/dev/nbd0", 00:06:06.375 "bdev_name": "Malloc0" 00:06:06.375 }, 00:06:06.375 { 00:06:06.375 "nbd_device": "/dev/nbd1", 00:06:06.375 "bdev_name": "Malloc1" 00:06:06.375 } 00:06:06.375 ]' 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.375 { 
00:06:06.375 "nbd_device": "/dev/nbd0", 00:06:06.375 "bdev_name": "Malloc0" 00:06:06.375 }, 00:06:06.375 { 00:06:06.375 "nbd_device": "/dev/nbd1", 00:06:06.375 "bdev_name": "Malloc1" 00:06:06.375 } 00:06:06.375 ]' 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.375 /dev/nbd1' 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.375 /dev/nbd1' 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.375 256+0 records in 00:06:06.375 256+0 records out 00:06:06.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138882 s, 75.5 MB/s 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.375 05:44:13 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.375 256+0 records in 00:06:06.375 256+0 records out 00:06:06.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0210688 s, 49.8 MB/s 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.375 256+0 records in 00:06:06.375 256+0 records out 00:06:06.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253843 s, 41.3 MB/s 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
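The write/verify flow traced above (`nbd_dd_data_verify`) follows a simple pattern: fill a temp file with 1 MiB of random data, `dd` it onto each device, then `cmp` each device back against the pattern. A minimal sketch of that pattern, using ordinary temp files as stand-ins for `/dev/nbd0`/`/dev/nbd1` (an assumption so it runs without real nbd devices; the `iflag=direct`/`oflag=direct` flags from the trace are dropped for the same reason):

```shell
#!/usr/bin/env bash
# Sketch of the nbd_dd_data_verify write/verify steps from the trace,
# with plain temp files standing in for /dev/nbd* (assumption).
set -euo pipefail

tmp_file=$(mktemp)   # stand-in for .../test/event/nbdrandtest
dev0=$(mktemp)       # stand-in for /dev/nbd0
dev1=$(mktemp)       # stand-in for /dev/nbd1

# "write": generate a 1 MiB random pattern (256 x 4 KiB blocks),
# then copy it onto each "device"
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
for dev in "$dev0" "$dev1"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 status=none
done

# "verify": byte-compare the first 1M of each "device" against the
# pattern; cmp exits nonzero on any mismatch, which aborts under set -e
for dev in "$dev0" "$dev1"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done

rm -f "$tmp_file" "$dev0" "$dev1"
```

Reaching the final `rm` means every `cmp` passed, which is exactly the condition the test relies on before it proceeds to `nbd_stop_disks`.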
00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.375 05:44:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.635 05:44:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.635 05:44:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.635 05:44:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.635 05:44:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.635 05:44:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.635 05:44:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.635 05:44:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.635 05:44:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.635 05:44:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.635 05:44:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.894 05:44:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.894 05:44:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.894 05:44:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.894 05:44:14 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.894 05:44:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.894 05:44:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.894 05:44:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.894 05:44:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.894 05:44:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.894 05:44:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.894 05:44:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.154 05:44:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:07.154 05:44:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:07.154 05:44:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.154 05:44:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:07.154 05:44:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:07.154 05:44:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.154 05:44:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:07.154 05:44:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:07.154 05:44:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:07.154 05:44:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:07.154 05:44:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:07.154 05:44:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:07.154 05:44:14 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.413 05:44:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:08.794 
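The `nbd_get_count` steps traced above hide a small shell subtlety: the device count comes from `grep -c /dev/nbd`, and `grep -c` exits nonzero when the count is zero, so the script falls back to `true` (visible as the `-- # true` trace line) to stay alive under `set -e`. A hedged sketch of just that counting step (the `count_nbd` helper name is mine, not the script's; the real script feeds it `jq -r '.[] | .nbd_device'` output from `nbd_get_disks`):

```shell
# Sketch of the nbd_get_count counting step: count lines matching
# /dev/nbd, tolerating a zero count. count_nbd is a hypothetical
# helper name for illustration.
count_nbd() {
    local names=$1
    # grep -c prints the match count but exits 1 when it is 0,
    # hence the `true` fallback seen in the trace
    echo "$names" | grep -c /dev/nbd || true
}

count_nbd '/dev/nbd0
/dev/nbd1'      # -> 2  (before nbd_stop_disks)
count_nbd ''    # -> 0  (after nbd_stop_disks, empty disk list)
```

Without the `|| true`, the post-teardown check (expected count 0) would abort the whole test instead of confirming the disks are gone.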
[2024-12-12 05:44:16.013518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.794 [2024-12-12 05:44:16.115886] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.794 [2024-12-12 05:44:16.115887] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.794 [2024-12-12 05:44:16.301332] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:08.794 [2024-12-12 05:44:16.301398] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:10.705 05:44:17 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59446 /var/tmp/spdk-nbd.sock 00:06:10.705 05:44:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59446 ']' 00:06:10.705 05:44:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.705 05:44:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.705 05:44:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:10.705 05:44:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.705 05:44:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.705 05:44:18 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.705 05:44:18 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:10.705 05:44:18 event.app_repeat -- event/event.sh@39 -- # killprocess 59446 00:06:10.705 05:44:18 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59446 ']' 00:06:10.705 05:44:18 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59446 00:06:10.705 05:44:18 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:10.705 05:44:18 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.705 05:44:18 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59446 00:06:10.705 05:44:18 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.705 05:44:18 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.705 05:44:18 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59446' 00:06:10.705 killing process with pid 59446 00:06:10.705 05:44:18 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59446 00:06:10.705 05:44:18 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59446 00:06:11.645 spdk_app_start is called in Round 0. 00:06:11.645 Shutdown signal received, stop current app iteration 00:06:11.645 Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 reinitialization... 00:06:11.645 spdk_app_start is called in Round 1. 00:06:11.645 Shutdown signal received, stop current app iteration 00:06:11.645 Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 reinitialization... 00:06:11.645 spdk_app_start is called in Round 2. 
00:06:11.645 Shutdown signal received, stop current app iteration 00:06:11.645 Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 reinitialization... 00:06:11.645 spdk_app_start is called in Round 3. 00:06:11.645 Shutdown signal received, stop current app iteration 00:06:11.905 05:44:19 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:11.905 05:44:19 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:11.905 00:06:11.905 real 0m18.876s 00:06:11.905 user 0m40.416s 00:06:11.905 sys 0m2.650s 00:06:11.905 05:44:19 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.905 05:44:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.905 ************************************ 00:06:11.905 END TEST app_repeat 00:06:11.905 ************************************ 00:06:11.905 05:44:19 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:11.905 05:44:19 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:11.905 05:44:19 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.905 05:44:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.905 05:44:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:11.905 ************************************ 00:06:11.905 START TEST cpu_locks 00:06:11.905 ************************************ 00:06:11.905 05:44:19 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:11.905 * Looking for test storage... 
00:06:11.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:11.905 05:44:19 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:11.905 05:44:19 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:11.905 05:44:19 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:12.165 05:44:19 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.165 05:44:19 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:12.165 05:44:19 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.165 05:44:19 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:12.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.165 --rc genhtml_branch_coverage=1 00:06:12.165 --rc genhtml_function_coverage=1 00:06:12.165 --rc genhtml_legend=1 00:06:12.165 --rc geninfo_all_blocks=1 00:06:12.165 --rc geninfo_unexecuted_blocks=1 00:06:12.165 00:06:12.165 ' 00:06:12.165 05:44:19 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:12.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.165 --rc genhtml_branch_coverage=1 00:06:12.165 --rc genhtml_function_coverage=1 00:06:12.165 --rc genhtml_legend=1 00:06:12.165 --rc geninfo_all_blocks=1 00:06:12.165 --rc geninfo_unexecuted_blocks=1 
00:06:12.165 00:06:12.165 ' 00:06:12.165 05:44:19 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:12.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.165 --rc genhtml_branch_coverage=1 00:06:12.165 --rc genhtml_function_coverage=1 00:06:12.165 --rc genhtml_legend=1 00:06:12.165 --rc geninfo_all_blocks=1 00:06:12.165 --rc geninfo_unexecuted_blocks=1 00:06:12.165 00:06:12.165 ' 00:06:12.165 05:44:19 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:12.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.165 --rc genhtml_branch_coverage=1 00:06:12.165 --rc genhtml_function_coverage=1 00:06:12.165 --rc genhtml_legend=1 00:06:12.165 --rc geninfo_all_blocks=1 00:06:12.165 --rc geninfo_unexecuted_blocks=1 00:06:12.165 00:06:12.165 ' 00:06:12.165 05:44:19 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:12.165 05:44:19 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:12.165 05:44:19 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:12.165 05:44:19 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:12.165 05:44:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.166 05:44:19 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.166 05:44:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.166 ************************************ 00:06:12.166 START TEST default_locks 00:06:12.166 ************************************ 00:06:12.166 05:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:12.166 05:44:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59888 00:06:12.166 05:44:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59888 00:06:12.166 05:44:19 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.166 05:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59888 ']' 00:06:12.166 05:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.166 05:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.166 05:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.166 05:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.166 05:44:19 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.166 [2024-12-12 05:44:19.606275] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:06:12.166 [2024-12-12 05:44:19.606478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59888 ] 00:06:12.426 [2024-12-12 05:44:19.769917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.426 [2024-12-12 05:44:19.876005] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.363 05:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.363 05:44:20 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:13.363 05:44:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59888 00:06:13.363 05:44:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59888 00:06:13.363 05:44:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:13.932 05:44:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59888 00:06:13.932 05:44:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59888 ']' 00:06:13.932 05:44:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59888 00:06:13.932 05:44:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:13.932 05:44:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.932 05:44:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59888 00:06:13.932 05:44:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.932 05:44:21 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.932 05:44:21 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59888' 00:06:13.932 killing process with pid 59888 00:06:13.932 05:44:21 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59888 00:06:13.932 05:44:21 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59888 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59888 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59888 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:16.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59888 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59888 ']' 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
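The `killprocess` trace above shows its two liveness checks: `kill -0 PID` tests that the process exists without sending a signal, and `ps --no-headers -o comm= PID` fetches its name (the script uses this to refuse killing a process named `sudo`). A small sketch of those two checks, using the current shell's own PID as a stand-in for the spdk_tgt PID (an assumption so it runs anywhere):

```shell
# Sketch of the killprocess liveness checks from the trace,
# probing our own shell instead of a real spdk_tgt process.
pid=$$

# kill -0 delivers no signal; it only reports whether PID exists
if kill -0 "$pid" 2>/dev/null; then alive=yes; else alive=no; fi

# -o comm= prints just the command name with no header line
name=$(ps --no-headers -o comm= "$pid")

echo "pid $pid alive=$alive comm=$name"
```

After a successful `kill`, the same `kill -0` probe is what turns into the "No such process" line seen later in the log once the PID is gone.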
00:06:16.470 05:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.470 ERROR: process (pid: 59888) is no longer running 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.470 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59888) - No such process 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:16.470 00:06:16.470 real 0m3.960s 00:06:16.470 user 0m3.912s 00:06:16.470 sys 0m0.647s 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.470 ************************************ 00:06:16.470 END TEST default_locks 00:06:16.470 ************************************ 00:06:16.470 05:44:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.470 05:44:23 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:16.470 05:44:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # 
'[' 2 -le 1 ']' 00:06:16.470 05:44:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.470 05:44:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.470 ************************************ 00:06:16.470 START TEST default_locks_via_rpc 00:06:16.470 ************************************ 00:06:16.470 05:44:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:16.471 05:44:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59963 00:06:16.471 05:44:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.471 05:44:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59963 00:06:16.471 05:44:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59963 ']' 00:06:16.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.471 05:44:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.471 05:44:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.471 05:44:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.471 05:44:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.471 05:44:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.471 [2024-12-12 05:44:23.635583] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:06:16.471 [2024-12-12 05:44:23.635697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59963 ] 00:06:16.471 [2024-12-12 05:44:23.806302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.471 [2024-12-12 05:44:23.921233] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.409 05:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.409 05:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:17.409 05:44:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:17.409 05:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.410 05:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.410 05:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.410 05:44:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:17.410 05:44:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:17.410 05:44:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:17.410 05:44:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:17.410 05:44:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:17.410 05:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:17.410 05:44:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.410 05:44:24 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:17.410 05:44:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59963 00:06:17.410 05:44:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59963 00:06:17.410 05:44:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.982 05:44:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59963 00:06:17.982 05:44:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59963 ']' 00:06:17.982 05:44:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59963 00:06:17.982 05:44:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:17.982 05:44:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.982 05:44:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59963 00:06:17.982 killing process with pid 59963 00:06:17.982 05:44:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.982 05:44:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.982 05:44:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59963' 00:06:17.982 05:44:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59963 00:06:17.982 05:44:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59963 00:06:20.536 00:06:20.536 real 0m4.005s 00:06:20.536 user 0m3.958s 00:06:20.536 sys 0m0.662s 00:06:20.536 05:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.536 
************************************ 00:06:20.536 END TEST default_locks_via_rpc 00:06:20.536 ************************************ 00:06:20.536 05:44:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:20.536 05:44:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:20.536 05:44:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:20.536 05:44:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.536 05:44:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.536 ************************************ 00:06:20.536 START TEST non_locking_app_on_locked_coremask 00:06:20.536 ************************************ 00:06:20.536 05:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:20.536 05:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60037 00:06:20.536 05:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.536 05:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60037 /var/tmp/spdk.sock 00:06:20.536 05:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60037 ']' 00:06:20.536 05:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.536 05:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:20.536 05:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:20.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:20.536 05:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:20.536 05:44:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:20.536 [2024-12-12 05:44:27.710694] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:20.537 [2024-12-12 05:44:27.710909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60037 ] 00:06:20.537 [2024-12-12 05:44:27.881934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.537 [2024-12-12 05:44:27.991308] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.476 05:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.476 05:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:21.476 05:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60053 00:06:21.476 05:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:21.476 05:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60053 /var/tmp/spdk2.sock 00:06:21.476 05:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60053 ']' 00:06:21.476 05:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.476 05:44:28 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.476 05:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.476 05:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.476 05:44:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.476 [2024-12-12 05:44:28.906973] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:21.476 [2024-12-12 05:44:28.907165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60053 ] 00:06:21.736 [2024-12-12 05:44:29.071248] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
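The "CPU core locks deactivated" notice above, and the repeated `lslocks -p <pid> | grep spdk_cpu_lock` checks, revolve around per-core advisory lock files. A hedged illustration of that mechanism, with a temp file standing in for the real `/var/tmp/spdk_cpu_lock_NNN` paths (the exact SPDK locking code is not shown in this log, only its observable effect):

```shell
#!/usr/bin/env bash
# Hedged illustration: spdk_tgt holds an exclusive flock on one file per
# claimed core unless --disable-cpumask-locks is given. The lock path
# below is a stand-in, not the real /var/tmp/spdk_cpu_lock_NNN.
lockfile=$(mktemp /tmp/cpu_lock_demo.XXXXXX)
exec 9>"$lockfile"               # keep an fd open on the lock file
flock -n 9 && echo "core lock acquired"
# A second open file description on the same file conflicts, just as a
# second spdk_tgt claiming the same core would:
( exec 8>"$lockfile"; flock -n 8 || echo "core lock busy" )
```

Because `flock` locks follow the open file description, the subshell's fresh descriptor cannot take the lock while fd 9 holds it, which is the conflict the non_locking test avoids by starting its second target with --disable-cpumask-locks.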
00:06:21.736 [2024-12-12 05:44:29.071295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.996 [2024-12-12 05:44:29.298532] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.535 05:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:24.535 05:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:24.535 05:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60037 00:06:24.535 05:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60037 00:06:24.535 05:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.535 05:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60037 00:06:24.535 05:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60037 ']' 00:06:24.535 05:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60037 00:06:24.535 05:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:24.535 05:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.535 05:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60037 00:06:24.535 killing process with pid 60037 00:06:24.535 05:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:24.535 05:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:24.535 05:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 60037' 00:06:24.535 05:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60037 00:06:24.535 05:44:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60037 00:06:29.812 05:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60053 00:06:29.812 05:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60053 ']' 00:06:29.812 05:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60053 00:06:29.812 05:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:29.812 05:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.812 05:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60053 00:06:29.812 killing process with pid 60053 00:06:29.812 05:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.812 05:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.812 05:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60053' 00:06:29.812 05:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60053 00:06:29.812 05:44:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60053 00:06:31.193 ************************************ 00:06:31.193 END TEST non_locking_app_on_locked_coremask 00:06:31.193 ************************************ 00:06:31.193 00:06:31.193 real 0m11.076s 
00:06:31.193 user 0m11.302s 00:06:31.193 sys 0m1.138s 00:06:31.193 05:44:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:31.193 05:44:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.453 05:44:38 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:31.453 05:44:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:31.453 05:44:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:31.453 05:44:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.453 ************************************ 00:06:31.453 START TEST locking_app_on_unlocked_coremask 00:06:31.453 ************************************ 00:06:31.453 05:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:31.453 05:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60194 00:06:31.453 05:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:31.453 05:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60194 /var/tmp/spdk.sock 00:06:31.453 05:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60194 ']' 00:06:31.453 05:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:31.453 05:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.453 05:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.453 05:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.453 05:44:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.453 [2024-12-12 05:44:38.849047] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:31.453 [2024-12-12 05:44:38.849149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60194 ] 00:06:31.714 [2024-12-12 05:44:39.017714] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:31.714 [2024-12-12 05:44:39.017764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.714 [2024-12-12 05:44:39.130579] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.654 05:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:32.654 05:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:32.654 05:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60217 00:06:32.654 05:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60217 /var/tmp/spdk2.sock 00:06:32.654 05:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:32.654 05:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60217 ']' 00:06:32.654 05:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:32.654 05:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:32.654 05:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:32.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:32.654 05:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:32.654 05:44:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.654 [2024-12-12 05:44:40.077582] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
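The killprocess helper that appears throughout this log bottoms out in a `kill -0` liveness check. A minimal sketch, assuming liveness alone (the real helper, per the log, first compares the process name via `ps --no-headers -o comm=` and special-cases sudo; the helper name here is ours):

```shell
#!/usr/bin/env bash
# Hedged sketch of the liveness check inside killprocess: kill -0 sends
# no signal, it only reports whether the pid exists and is signalable.
is_alive() { kill -0 "$1" 2>/dev/null; }

sleep 30 &                        # stand-in for a running spdk_tgt
pid=$!
is_alive "$pid" && echo "process $pid is alive"
kill "$pid"
wait "$pid" 2>/dev/null || true   # reap it so the pid truly disappears
is_alive "$pid" || echo "process $pid is gone"
```

The `wait` matters: until the parent reaps the child, the pid remains a zombie and `kill -0` would still succeed.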
00:06:32.654 [2024-12-12 05:44:40.077785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60217 ] 00:06:32.913 [2024-12-12 05:44:40.243134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.173 [2024-12-12 05:44:40.462358] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.142 05:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.142 05:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:35.142 05:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60217 00:06:35.142 05:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.142 05:44:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60217 00:06:35.709 05:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60194 00:06:35.709 05:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60194 ']' 00:06:35.709 05:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60194 00:06:35.709 05:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:35.709 05:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.709 05:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60194 00:06:35.709 killing process with pid 60194 00:06:35.709 05:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.709 05:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.709 05:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60194' 00:06:35.709 05:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60194 00:06:35.709 05:44:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60194 00:06:40.987 05:44:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60217 00:06:40.987 05:44:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60217 ']' 00:06:40.987 05:44:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60217 00:06:40.987 05:44:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:40.987 05:44:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.987 05:44:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60217 00:06:40.987 killing process with pid 60217 00:06:40.987 05:44:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.987 05:44:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.987 05:44:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60217' 00:06:40.987 05:44:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60217 00:06:40.987 05:44:47 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 60217 00:06:42.895 ************************************ 00:06:42.895 END TEST locking_app_on_unlocked_coremask 00:06:42.895 ************************************ 00:06:42.895 00:06:42.895 real 0m11.375s 00:06:42.895 user 0m11.642s 00:06:42.895 sys 0m1.187s 00:06:42.895 05:44:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.895 05:44:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.895 05:44:50 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:42.895 05:44:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.895 05:44:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.895 05:44:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.895 ************************************ 00:06:42.895 START TEST locking_app_on_locked_coremask 00:06:42.895 ************************************ 00:06:42.895 05:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:42.895 05:44:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60363 00:06:42.895 05:44:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.895 05:44:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60363 /var/tmp/spdk.sock 00:06:42.895 05:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60363 ']' 00:06:42.895 05:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.895 05:44:50 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.895 05:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.895 05:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.895 05:44:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.895 [2024-12-12 05:44:50.290179] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:42.895 [2024-12-12 05:44:50.290377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60363 ] 00:06:43.154 [2024-12-12 05:44:50.462188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.154 [2024-12-12 05:44:50.567320] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.098 05:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.098 05:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:44.098 05:44:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60379 00:06:44.098 05:44:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:44.098 05:44:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60379 /var/tmp/spdk2.sock 00:06:44.098 05:44:51 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@652 -- # local es=0 00:06:44.098 05:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60379 /var/tmp/spdk2.sock 00:06:44.098 05:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:44.098 05:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.098 05:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:44.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.098 05:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:44.098 05:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60379 /var/tmp/spdk2.sock 00:06:44.098 05:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60379 ']' 00:06:44.098 05:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.098 05:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.098 05:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.098 05:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.098 05:44:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.098 [2024-12-12 05:44:51.452602] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:06:44.098 [2024-12-12 05:44:51.452817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60379 ] 00:06:44.098 [2024-12-12 05:44:51.616683] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60363 has claimed it. 00:06:44.098 [2024-12-12 05:44:51.616755] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:44.667 ERROR: process (pid: 60379) is no longer running 00:06:44.667 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60379) - No such process 00:06:44.667 05:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.667 05:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:44.667 05:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:44.667 05:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:44.667 05:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:44.667 05:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:44.667 05:44:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60363 00:06:44.667 05:44:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60363 00:06:44.667 05:44:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.235 05:44:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60363 00:06:45.236 05:44:52 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60363 ']' 00:06:45.236 05:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60363 00:06:45.236 05:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:45.236 05:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.236 05:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60363 00:06:45.236 05:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.236 05:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.236 05:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60363' 00:06:45.236 killing process with pid 60363 00:06:45.236 05:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60363 00:06:45.236 05:44:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60363 00:06:47.775 00:06:47.775 real 0m4.607s 00:06:47.775 user 0m4.741s 00:06:47.776 sys 0m0.780s 00:06:47.776 05:44:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.776 05:44:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.776 ************************************ 00:06:47.776 END TEST locking_app_on_locked_coremask 00:06:47.776 ************************************ 00:06:47.776 05:44:54 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:47.776 05:44:54 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:06:47.776 05:44:54 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.776 05:44:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.776 ************************************ 00:06:47.776 START TEST locking_overlapped_coremask 00:06:47.776 ************************************ 00:06:47.776 05:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:47.776 05:44:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60449 00:06:47.776 05:44:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:47.776 05:44:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60449 /var/tmp/spdk.sock 00:06:47.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.776 05:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60449 ']' 00:06:47.776 05:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.776 05:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:47.776 05:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.776 05:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:47.776 05:44:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.776 [2024-12-12 05:44:54.969777] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:06:47.776 [2024-12-12 05:44:54.969905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60449 ] 00:06:47.776 [2024-12-12 05:44:55.145119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.776 [2024-12-12 05:44:55.258610] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.776 [2024-12-12 05:44:55.258673] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.776 [2024-12-12 05:44:55.258719] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:48.715 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:48.715 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:48.715 05:44:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60468 00:06:48.715 05:44:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:48.715 05:44:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60468 /var/tmp/spdk2.sock 00:06:48.715 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:48.715 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60468 /var/tmp/spdk2.sock 00:06:48.715 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:48.715 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.715 05:44:56 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:48.715 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.715 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60468 /var/tmp/spdk2.sock 00:06:48.715 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60468 ']' 00:06:48.715 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.715 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.715 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.715 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.715 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.715 [2024-12-12 05:44:56.187170] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:48.715 [2024-12-12 05:44:56.187738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60468 ] 00:06:48.974 [2024-12-12 05:44:56.359258] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60449 has claimed it. 00:06:48.974 [2024-12-12 05:44:56.359310] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:06:49.543 ERROR: process (pid: 60468) is no longer running 00:06:49.543 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60468) - No such process 00:06:49.543 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.543 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:49.543 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:49.543 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:49.543 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:49.543 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:49.543 05:44:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:49.543 05:44:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:49.543 05:44:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:49.543 05:44:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:49.543 05:44:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60449 00:06:49.543 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60449 ']' 00:06:49.543 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60449 00:06:49.543 05:44:56 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:49.543 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.543 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60449 00:06:49.543 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:49.543 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:49.543 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60449' 00:06:49.543 killing process with pid 60449 00:06:49.543 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60449 00:06:49.543 05:44:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60449 00:06:52.083 00:06:52.083 real 0m4.336s 00:06:52.083 user 0m11.761s 00:06:52.083 sys 0m0.580s 00:06:52.083 05:44:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.083 05:44:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.083 ************************************ 00:06:52.083 END TEST locking_overlapped_coremask 00:06:52.083 ************************************ 00:06:52.083 05:44:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:52.083 05:44:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.083 05:44:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.083 05:44:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.083 ************************************ 00:06:52.083 START TEST 
locking_overlapped_coremask_via_rpc 00:06:52.083 ************************************ 00:06:52.083 05:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:52.083 05:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60532 00:06:52.083 05:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:52.083 05:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60532 /var/tmp/spdk.sock 00:06:52.083 05:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60532 ']' 00:06:52.083 05:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.083 05:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.083 05:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.083 05:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.083 05:44:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.083 [2024-12-12 05:44:59.368782] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:06:52.083 [2024-12-12 05:44:59.368917] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60532 ] 00:06:52.083 [2024-12-12 05:44:59.543055] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:52.083 [2024-12-12 05:44:59.543106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.343 [2024-12-12 05:44:59.660730] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.343 [2024-12-12 05:44:59.660871] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.343 [2024-12-12 05:44:59.660909] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.323 05:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.323 05:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:53.323 05:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60556 00:06:53.323 05:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:53.323 05:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60556 /var/tmp/spdk2.sock 00:06:53.323 05:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60556 ']' 00:06:53.323 05:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.323 05:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.323 05:45:00 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.323 05:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.323 05:45:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.323 [2024-12-12 05:45:00.608692] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:06:53.323 [2024-12-12 05:45:00.608890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60556 ] 00:06:53.323 [2024-12-12 05:45:00.775340] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:53.323 [2024-12-12 05:45:00.775416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.582 [2024-12-12 05:45:01.013864] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.582 [2024-12-12 05:45:01.017612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.582 [2024-12-12 05:45:01.017650] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:56.121 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.121 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:56.121 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:56.121 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.121 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.121 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:56.121 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:56.121 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:56.121 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:56.121 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:56.121 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.121 05:45:03 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:56.121 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:56.121 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:56.121 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:56.121 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.121 [2024-12-12 05:45:03.177683] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60532 has claimed it. 00:06:56.121 request: 00:06:56.121 { 00:06:56.121 "method": "framework_enable_cpumask_locks", 00:06:56.121 "req_id": 1 00:06:56.121 } 00:06:56.121 Got JSON-RPC error response 00:06:56.121 response: 00:06:56.121 { 00:06:56.121 "code": -32603, 00:06:56.121 "message": "Failed to claim CPU core: 2" 00:06:56.121 } 00:06:56.121 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:56.121 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:56.121 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:56.121 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:56.121 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60532 /var/tmp/spdk.sock 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 60532 ']' 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60556 /var/tmp/spdk2.sock 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60556 ']' 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:56.122 00:06:56.122 real 0m4.362s 00:06:56.122 user 0m1.272s 00:06:56.122 sys 0m0.201s 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.122 05:45:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.122 ************************************ 00:06:56.122 END TEST locking_overlapped_coremask_via_rpc 00:06:56.122 ************************************ 00:06:56.381 05:45:03 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:56.381 05:45:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60532 ]] 00:06:56.381 05:45:03 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 60532 00:06:56.381 05:45:03 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60532 ']' 00:06:56.381 05:45:03 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60532 00:06:56.381 05:45:03 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:56.381 05:45:03 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.381 05:45:03 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60532 00:06:56.381 05:45:03 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.381 05:45:03 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.381 05:45:03 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60532' 00:06:56.381 killing process with pid 60532 00:06:56.381 05:45:03 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60532 00:06:56.381 05:45:03 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60532 00:06:58.920 05:45:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60556 ]] 00:06:58.920 05:45:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60556 00:06:58.920 05:45:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60556 ']' 00:06:58.920 05:45:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60556 00:06:58.920 05:45:06 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:58.920 05:45:06 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.920 05:45:06 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60556 00:06:58.920 killing process with pid 60556 00:06:58.920 05:45:06 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:58.920 05:45:06 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:58.920 05:45:06 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 60556' 00:06:58.920 05:45:06 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60556 00:06:58.920 05:45:06 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60556 00:07:01.459 05:45:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:01.459 05:45:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:01.459 05:45:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60532 ]] 00:07:01.459 05:45:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60532 00:07:01.459 Process with pid 60532 is not found 00:07:01.459 05:45:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60532 ']' 00:07:01.459 05:45:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60532 00:07:01.459 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60532) - No such process 00:07:01.459 05:45:08 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60532 is not found' 00:07:01.459 05:45:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60556 ]] 00:07:01.459 05:45:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60556 00:07:01.459 05:45:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60556 ']' 00:07:01.459 05:45:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60556 00:07:01.459 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60556) - No such process 00:07:01.459 05:45:08 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60556 is not found' 00:07:01.459 Process with pid 60556 is not found 00:07:01.459 05:45:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:01.459 00:07:01.459 real 0m49.300s 00:07:01.459 user 1m24.584s 00:07:01.459 sys 0m6.349s 00:07:01.459 05:45:08 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.459 05:45:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.459 
************************************ 00:07:01.459 END TEST cpu_locks 00:07:01.459 ************************************ 00:07:01.459 ************************************ 00:07:01.459 END TEST event 00:07:01.459 ************************************ 00:07:01.459 00:07:01.459 real 1m20.327s 00:07:01.459 user 2m26.495s 00:07:01.459 sys 0m10.177s 00:07:01.459 05:45:08 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.459 05:45:08 event -- common/autotest_common.sh@10 -- # set +x 00:07:01.459 05:45:08 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:01.459 05:45:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.459 05:45:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.459 05:45:08 -- common/autotest_common.sh@10 -- # set +x 00:07:01.459 ************************************ 00:07:01.459 START TEST thread 00:07:01.459 ************************************ 00:07:01.459 05:45:08 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:01.459 * Looking for test storage... 
00:07:01.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:01.459 05:45:08 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:01.459 05:45:08 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:07:01.459 05:45:08 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:01.459 05:45:08 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:01.459 05:45:08 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.459 05:45:08 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.459 05:45:08 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.459 05:45:08 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.459 05:45:08 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.459 05:45:08 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.459 05:45:08 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:01.459 05:45:08 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.459 05:45:08 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.459 05:45:08 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.459 05:45:08 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.459 05:45:08 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:01.459 05:45:08 thread -- scripts/common.sh@345 -- # : 1 00:07:01.459 05:45:08 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.459 05:45:08 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:01.459 05:45:08 thread -- scripts/common.sh@365 -- # decimal 1 00:07:01.459 05:45:08 thread -- scripts/common.sh@353 -- # local d=1 00:07:01.459 05:45:08 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.459 05:45:08 thread -- scripts/common.sh@355 -- # echo 1 00:07:01.459 05:45:08 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.459 05:45:08 thread -- scripts/common.sh@366 -- # decimal 2 00:07:01.459 05:45:08 thread -- scripts/common.sh@353 -- # local d=2 00:07:01.459 05:45:08 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.459 05:45:08 thread -- scripts/common.sh@355 -- # echo 2 00:07:01.459 05:45:08 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.459 05:45:08 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.459 05:45:08 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.459 05:45:08 thread -- scripts/common.sh@368 -- # return 0 00:07:01.459 05:45:08 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.459 05:45:08 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:01.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.459 --rc genhtml_branch_coverage=1 00:07:01.459 --rc genhtml_function_coverage=1 00:07:01.459 --rc genhtml_legend=1 00:07:01.459 --rc geninfo_all_blocks=1 00:07:01.459 --rc geninfo_unexecuted_blocks=1 00:07:01.459 00:07:01.459 ' 00:07:01.459 05:45:08 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:01.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.459 --rc genhtml_branch_coverage=1 00:07:01.459 --rc genhtml_function_coverage=1 00:07:01.459 --rc genhtml_legend=1 00:07:01.459 --rc geninfo_all_blocks=1 00:07:01.459 --rc geninfo_unexecuted_blocks=1 00:07:01.459 00:07:01.459 ' 00:07:01.459 05:45:08 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:01.459 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.459 --rc genhtml_branch_coverage=1 00:07:01.459 --rc genhtml_function_coverage=1 00:07:01.459 --rc genhtml_legend=1 00:07:01.459 --rc geninfo_all_blocks=1 00:07:01.459 --rc geninfo_unexecuted_blocks=1 00:07:01.459 00:07:01.459 ' 00:07:01.459 05:45:08 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:01.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.459 --rc genhtml_branch_coverage=1 00:07:01.459 --rc genhtml_function_coverage=1 00:07:01.459 --rc genhtml_legend=1 00:07:01.459 --rc geninfo_all_blocks=1 00:07:01.459 --rc geninfo_unexecuted_blocks=1 00:07:01.459 00:07:01.459 ' 00:07:01.459 05:45:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:01.459 05:45:08 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:01.459 05:45:08 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.459 05:45:08 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.459 ************************************ 00:07:01.459 START TEST thread_poller_perf 00:07:01.459 ************************************ 00:07:01.459 05:45:08 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:01.459 [2024-12-12 05:45:08.972772] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:07:01.459 [2024-12-12 05:45:08.972940] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60751 ] 00:07:01.720 [2024-12-12 05:45:09.145723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.980 [2024-12-12 05:45:09.255478] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.980 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:03.359 [2024-12-12T05:45:10.881Z] ====================================== 00:07:03.359 [2024-12-12T05:45:10.881Z] busy:2300648482 (cyc) 00:07:03.359 [2024-12-12T05:45:10.881Z] total_run_count: 413000 00:07:03.359 [2024-12-12T05:45:10.881Z] tsc_hz: 2290000000 (cyc) 00:07:03.359 [2024-12-12T05:45:10.881Z] ====================================== 00:07:03.359 [2024-12-12T05:45:10.881Z] poller_cost: 5570 (cyc), 2432 (nsec) 00:07:03.359 00:07:03.359 real 0m1.557s 00:07:03.359 user 0m1.353s 00:07:03.359 sys 0m0.097s 00:07:03.359 05:45:10 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.359 05:45:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:03.359 ************************************ 00:07:03.359 END TEST thread_poller_perf 00:07:03.359 ************************************ 00:07:03.359 05:45:10 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:03.359 05:45:10 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:03.359 05:45:10 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.359 05:45:10 thread -- common/autotest_common.sh@10 -- # set +x 00:07:03.359 ************************************ 00:07:03.359 START TEST thread_poller_perf 00:07:03.359 
************************************ 00:07:03.359 05:45:10 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:03.359 [2024-12-12 05:45:10.592336] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:07:03.359 [2024-12-12 05:45:10.592434] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60782 ] 00:07:03.359 [2024-12-12 05:45:10.766898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.359 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:03.359 [2024-12-12 05:45:10.878542] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.798 [2024-12-12T05:45:12.320Z] ====================================== 00:07:04.798 [2024-12-12T05:45:12.320Z] busy:2293011156 (cyc) 00:07:04.798 [2024-12-12T05:45:12.320Z] total_run_count: 5003000 00:07:04.798 [2024-12-12T05:45:12.320Z] tsc_hz: 2290000000 (cyc) 00:07:04.798 [2024-12-12T05:45:12.320Z] ====================================== 00:07:04.798 [2024-12-12T05:45:12.320Z] poller_cost: 458 (cyc), 200 (nsec) 00:07:04.798 00:07:04.798 real 0m1.542s 00:07:04.798 user 0m1.337s 00:07:04.798 sys 0m0.099s 00:07:04.798 05:45:12 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.798 ************************************ 00:07:04.798 END TEST thread_poller_perf 00:07:04.798 ************************************ 00:07:04.798 05:45:12 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:04.798 05:45:12 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:04.798 ************************************ 00:07:04.798 END TEST thread 00:07:04.798 ************************************ 00:07:04.798 
00:07:04.798 real 0m3.457s 00:07:04.798 user 0m2.852s 00:07:04.798 sys 0m0.403s 00:07:04.798 05:45:12 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.798 05:45:12 thread -- common/autotest_common.sh@10 -- # set +x 00:07:04.798 05:45:12 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:04.798 05:45:12 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:04.798 05:45:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.798 05:45:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.798 05:45:12 -- common/autotest_common.sh@10 -- # set +x 00:07:04.798 ************************************ 00:07:04.798 START TEST app_cmdline 00:07:04.798 ************************************ 00:07:04.798 05:45:12 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:05.059 * Looking for test storage... 00:07:05.059 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:05.059 05:45:12 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:05.059 05:45:12 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:07:05.059 05:45:12 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:05.059 05:45:12 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:05.059 05:45:12 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:05.059 05:45:12 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:05.059 05:45:12 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:05.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.059 --rc genhtml_branch_coverage=1 00:07:05.059 --rc genhtml_function_coverage=1 00:07:05.059 --rc 
genhtml_legend=1 00:07:05.059 --rc geninfo_all_blocks=1 00:07:05.059 --rc geninfo_unexecuted_blocks=1 00:07:05.059 00:07:05.059 ' 00:07:05.059 05:45:12 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:05.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.059 --rc genhtml_branch_coverage=1 00:07:05.059 --rc genhtml_function_coverage=1 00:07:05.059 --rc genhtml_legend=1 00:07:05.059 --rc geninfo_all_blocks=1 00:07:05.059 --rc geninfo_unexecuted_blocks=1 00:07:05.059 00:07:05.059 ' 00:07:05.059 05:45:12 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:05.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.059 --rc genhtml_branch_coverage=1 00:07:05.059 --rc genhtml_function_coverage=1 00:07:05.059 --rc genhtml_legend=1 00:07:05.059 --rc geninfo_all_blocks=1 00:07:05.059 --rc geninfo_unexecuted_blocks=1 00:07:05.059 00:07:05.059 ' 00:07:05.059 05:45:12 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:05.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:05.059 --rc genhtml_branch_coverage=1 00:07:05.059 --rc genhtml_function_coverage=1 00:07:05.059 --rc genhtml_legend=1 00:07:05.059 --rc geninfo_all_blocks=1 00:07:05.059 --rc geninfo_unexecuted_blocks=1 00:07:05.059 00:07:05.059 ' 00:07:05.059 05:45:12 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:05.059 05:45:12 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60871 00:07:05.059 05:45:12 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:05.059 05:45:12 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60871 00:07:05.059 05:45:12 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60871 ']' 00:07:05.059 05:45:12 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.059 05:45:12 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:07:05.059 05:45:12 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.059 05:45:12 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.059 05:45:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:05.059 [2024-12-12 05:45:12.533275] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:07:05.059 [2024-12-12 05:45:12.533484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60871 ] 00:07:05.319 [2024-12-12 05:45:12.706461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.319 [2024-12-12 05:45:12.823609] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.259 05:45:13 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.259 05:45:13 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:06.259 05:45:13 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:06.519 { 00:07:06.519 "version": "SPDK v25.01-pre git sha1 d58eef2a2", 00:07:06.519 "fields": { 00:07:06.519 "major": 25, 00:07:06.519 "minor": 1, 00:07:06.519 "patch": 0, 00:07:06.519 "suffix": "-pre", 00:07:06.519 "commit": "d58eef2a2" 00:07:06.519 } 00:07:06.519 } 00:07:06.519 05:45:13 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:06.519 05:45:13 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:06.519 05:45:13 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:06.519 05:45:13 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:06.519 05:45:13 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:06.519 05:45:13 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.520 05:45:13 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:06.520 05:45:13 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:06.520 05:45:13 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:06.520 05:45:13 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.520 05:45:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:06.520 05:45:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:06.520 05:45:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:06.520 05:45:13 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:06.520 05:45:13 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:06.520 05:45:13 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:06.520 05:45:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:06.520 05:45:13 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:06.520 05:45:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:06.520 05:45:13 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:06.520 05:45:13 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:06.520 05:45:13 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:06.520 05:45:13 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:06.520 05:45:13 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:06.780 request: 00:07:06.780 { 00:07:06.780 "method": "env_dpdk_get_mem_stats", 00:07:06.780 "req_id": 1 00:07:06.780 } 00:07:06.780 Got JSON-RPC error response 00:07:06.780 response: 00:07:06.780 { 00:07:06.780 "code": -32601, 00:07:06.780 "message": "Method not found" 00:07:06.780 } 00:07:06.780 05:45:14 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:06.780 05:45:14 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:06.780 05:45:14 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:06.780 05:45:14 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:06.780 05:45:14 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60871 00:07:06.780 05:45:14 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60871 ']' 00:07:06.780 05:45:14 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60871 00:07:06.780 05:45:14 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:06.780 05:45:14 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.780 05:45:14 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60871 00:07:06.780 killing process with pid 60871 00:07:06.780 05:45:14 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.780 05:45:14 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.780 05:45:14 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60871' 00:07:06.780 05:45:14 app_cmdline -- common/autotest_common.sh@973 -- # kill 60871 00:07:06.780 05:45:14 app_cmdline -- common/autotest_common.sh@978 -- # wait 60871 00:07:09.322 ************************************ 00:07:09.322 END TEST app_cmdline 00:07:09.322 ************************************ 
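The -32601 response above is the expected outcome: spdk_tgt was launched with `--rpcs-allowed spdk_get_version,rpc_get_methods`, so `env_dpdk_get_mem_stats` is rejected before dispatch. A minimal stand-in for that allowlist gate (toy shell model, not SPDK code):

```shell
# Toy model of the --rpcs-allowed gate exercised by cmdline.sh@30 above:
# any method outside the allowlist gets JSON-RPC error -32601.
rpcs_allowed="spdk_get_version,rpc_get_methods"
dispatch() {
  case ",${rpcs_allowed}," in
    *",$1,"*) echo "ok: $1" ;;
    *)        echo "error -32601: Method not found ($1)" ;;
  esac
}
dispatch spdk_get_version        # allowed
dispatch env_dpdk_get_mem_stats  # rejected, as in the trace above
```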
00:07:09.322 00:07:09.322 real 0m4.127s 00:07:09.322 user 0m4.292s 00:07:09.322 sys 0m0.600s 00:07:09.322 05:45:16 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.322 05:45:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:09.322 05:45:16 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:09.322 05:45:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.322 05:45:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.322 05:45:16 -- common/autotest_common.sh@10 -- # set +x 00:07:09.322 ************************************ 00:07:09.322 START TEST version 00:07:09.322 ************************************ 00:07:09.322 05:45:16 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:09.322 * Looking for test storage... 00:07:09.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:09.322 05:45:16 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:09.322 05:45:16 version -- common/autotest_common.sh@1711 -- # lcov --version 00:07:09.322 05:45:16 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:09.322 05:45:16 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:09.322 05:45:16 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.322 05:45:16 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.322 05:45:16 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.322 05:45:16 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.322 05:45:16 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.322 05:45:16 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.322 05:45:16 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.322 05:45:16 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.322 05:45:16 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.322 05:45:16 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:09.322 05:45:16 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.322 05:45:16 version -- scripts/common.sh@344 -- # case "$op" in 00:07:09.322 05:45:16 version -- scripts/common.sh@345 -- # : 1 00:07:09.322 05:45:16 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.322 05:45:16 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:09.322 05:45:16 version -- scripts/common.sh@365 -- # decimal 1 00:07:09.322 05:45:16 version -- scripts/common.sh@353 -- # local d=1 00:07:09.322 05:45:16 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.322 05:45:16 version -- scripts/common.sh@355 -- # echo 1 00:07:09.322 05:45:16 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.322 05:45:16 version -- scripts/common.sh@366 -- # decimal 2 00:07:09.322 05:45:16 version -- scripts/common.sh@353 -- # local d=2 00:07:09.322 05:45:16 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.322 05:45:16 version -- scripts/common.sh@355 -- # echo 2 00:07:09.322 05:45:16 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.322 05:45:16 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.322 05:45:16 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.322 05:45:16 version -- scripts/common.sh@368 -- # return 0 00:07:09.322 05:45:16 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.323 05:45:16 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:09.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.323 --rc genhtml_branch_coverage=1 00:07:09.323 --rc genhtml_function_coverage=1 00:07:09.323 --rc genhtml_legend=1 00:07:09.323 --rc geninfo_all_blocks=1 00:07:09.323 --rc geninfo_unexecuted_blocks=1 00:07:09.323 00:07:09.323 ' 00:07:09.323 05:45:16 version -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:07:09.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.323 --rc genhtml_branch_coverage=1 00:07:09.323 --rc genhtml_function_coverage=1 00:07:09.323 --rc genhtml_legend=1 00:07:09.323 --rc geninfo_all_blocks=1 00:07:09.323 --rc geninfo_unexecuted_blocks=1 00:07:09.323 00:07:09.323 ' 00:07:09.323 05:45:16 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:09.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.323 --rc genhtml_branch_coverage=1 00:07:09.323 --rc genhtml_function_coverage=1 00:07:09.323 --rc genhtml_legend=1 00:07:09.323 --rc geninfo_all_blocks=1 00:07:09.323 --rc geninfo_unexecuted_blocks=1 00:07:09.323 00:07:09.323 ' 00:07:09.323 05:45:16 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:09.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.323 --rc genhtml_branch_coverage=1 00:07:09.323 --rc genhtml_function_coverage=1 00:07:09.323 --rc genhtml_legend=1 00:07:09.323 --rc geninfo_all_blocks=1 00:07:09.323 --rc geninfo_unexecuted_blocks=1 00:07:09.323 00:07:09.323 ' 00:07:09.323 05:45:16 version -- app/version.sh@17 -- # get_header_version major 00:07:09.323 05:45:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:09.323 05:45:16 version -- app/version.sh@14 -- # cut -f2 00:07:09.323 05:45:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:09.323 05:45:16 version -- app/version.sh@17 -- # major=25 00:07:09.323 05:45:16 version -- app/version.sh@18 -- # get_header_version minor 00:07:09.323 05:45:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:09.323 05:45:16 version -- app/version.sh@14 -- # cut -f2 00:07:09.323 05:45:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:09.323 05:45:16 version -- app/version.sh@18 -- # minor=1 00:07:09.323 05:45:16 
version -- app/version.sh@19 -- # get_header_version patch 00:07:09.323 05:45:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:09.323 05:45:16 version -- app/version.sh@14 -- # cut -f2 00:07:09.323 05:45:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:09.323 05:45:16 version -- app/version.sh@19 -- # patch=0 00:07:09.323 05:45:16 version -- app/version.sh@20 -- # get_header_version suffix 00:07:09.323 05:45:16 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:09.323 05:45:16 version -- app/version.sh@14 -- # cut -f2 00:07:09.323 05:45:16 version -- app/version.sh@14 -- # tr -d '"' 00:07:09.323 05:45:16 version -- app/version.sh@20 -- # suffix=-pre 00:07:09.323 05:45:16 version -- app/version.sh@22 -- # version=25.1 00:07:09.323 05:45:16 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:09.323 05:45:16 version -- app/version.sh@28 -- # version=25.1rc0 00:07:09.323 05:45:16 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:09.323 05:45:16 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:09.323 05:45:16 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:09.323 05:45:16 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:09.323 ************************************ 00:07:09.323 END TEST version 00:07:09.323 ************************************ 00:07:09.323 00:07:09.323 real 0m0.312s 00:07:09.323 user 0m0.177s 00:07:09.323 sys 0m0.192s 00:07:09.323 05:45:16 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.323 05:45:16 version -- common/autotest_common.sh@10 -- # set +x 00:07:09.323 
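The `25.1rc0` string assembled above follows version.sh's rules: start with major.minor, append .patch only when patch is non-zero, and map a `-pre` suffix to `rc0`. A self-contained sketch using the header values grepped from version.h in this run:

```shell
# Reassemble the SPDK version string the way app/version.sh does, from
# the values reported above (major 25, minor 1, patch 0, suffix -pre).
major=25; minor=1; patch=0; suffix=-pre
version=${major}.${minor}
if (( patch != 0 )); then version=${version}.${patch}; fi
if [[ $suffix == -pre ]]; then version=${version}rc0; fi
echo "$version"
```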
05:45:16 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:09.323 05:45:16 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:09.323 05:45:16 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:09.323 05:45:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.323 05:45:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.323 05:45:16 -- common/autotest_common.sh@10 -- # set +x 00:07:09.323 ************************************ 00:07:09.323 START TEST bdev_raid 00:07:09.323 ************************************ 00:07:09.323 05:45:16 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:09.583 * Looking for test storage... 00:07:09.583 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:09.583 05:45:16 bdev_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:09.583 05:45:16 bdev_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:07:09.583 05:45:16 bdev_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:09.583 05:45:16 bdev_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.583 05:45:16 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:09.583 05:45:16 bdev_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.583 05:45:16 bdev_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:09.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.583 --rc genhtml_branch_coverage=1 00:07:09.583 --rc genhtml_function_coverage=1 00:07:09.583 --rc genhtml_legend=1 00:07:09.583 --rc geninfo_all_blocks=1 00:07:09.583 --rc geninfo_unexecuted_blocks=1 00:07:09.583 00:07:09.583 ' 00:07:09.583 05:45:16 bdev_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:09.583 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:09.583 --rc genhtml_branch_coverage=1 00:07:09.583 --rc genhtml_function_coverage=1 00:07:09.583 --rc genhtml_legend=1 00:07:09.583 --rc geninfo_all_blocks=1 00:07:09.583 --rc geninfo_unexecuted_blocks=1 00:07:09.583 00:07:09.583 ' 00:07:09.583 05:45:16 bdev_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:09.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.583 --rc genhtml_branch_coverage=1 00:07:09.583 --rc genhtml_function_coverage=1 00:07:09.583 --rc genhtml_legend=1 00:07:09.583 --rc geninfo_all_blocks=1 00:07:09.583 --rc geninfo_unexecuted_blocks=1 00:07:09.583 00:07:09.583 ' 00:07:09.583 05:45:16 bdev_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:09.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.583 --rc genhtml_branch_coverage=1 00:07:09.583 --rc genhtml_function_coverage=1 00:07:09.583 --rc genhtml_legend=1 00:07:09.583 --rc geninfo_all_blocks=1 00:07:09.583 --rc geninfo_unexecuted_blocks=1 00:07:09.583 00:07:09.583 ' 00:07:09.583 05:45:16 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:09.583 05:45:16 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:09.583 05:45:16 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:09.583 05:45:17 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:09.583 05:45:17 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:09.583 05:45:17 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:09.583 05:45:17 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:09.583 05:45:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.583 05:45:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.583 05:45:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:09.583 ************************************ 
00:07:09.583 START TEST raid1_resize_data_offset_test 00:07:09.583 ************************************ 00:07:09.583 05:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:07:09.583 05:45:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=61063 00:07:09.583 05:45:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 61063' 00:07:09.583 Process raid pid: 61063 00:07:09.583 05:45:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:09.583 05:45:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 61063 00:07:09.583 05:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 61063 ']' 00:07:09.583 05:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.583 05:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.583 05:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.584 05:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.584 05:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:09.843 [2024-12-12 05:45:17.108213] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:07:09.843 [2024-12-12 05:45:17.108416] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.843 [2024-12-12 05:45:17.280639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.103 [2024-12-12 05:45:17.389679] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.103 [2024-12-12 05:45:17.574166] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.103 [2024-12-12 05:45:17.574283] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:10.674 05:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.674 05:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:07:10.674 05:45:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:10.674 05:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.674 05:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.674 malloc0 00:07:10.674 05:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.674 05:45:17 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:10.674 05:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.674 05:45:17 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.674 malloc1 00:07:10.674 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.674 05:45:18 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:10.674 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.674 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.674 null0 00:07:10.674 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.674 05:45:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:10.674 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.674 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.674 [2024-12-12 05:45:18.099915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:10.674 [2024-12-12 05:45:18.101753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:10.674 [2024-12-12 05:45:18.101802] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:10.674 [2024-12-12 05:45:18.101958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:10.674 [2024-12-12 05:45:18.101972] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:10.674 [2024-12-12 05:45:18.102216] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:10.674 [2024-12-12 05:45:18.102361] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:10.674 [2024-12-12 05:45:18.102374] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:10.674 [2024-12-12 05:45:18.102526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
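The `blockcnt 129024, blocklen 512` in the raid create trace above is consistent with the base bdev geometry: each malloc bdev was created with size 64 and 512-byte blocks, and the 2048-block data_offset checked at bdev_raid.sh@929 is carved off the front. A quick consistency check (values copied from the log; reading the size argument as MiB is an assumption):

```shell
# Cross-check blockcnt from the raid create trace: a 64 MiB base bdev
# with 512 B blocks, minus the 2048-block data_offset.
size_mib=64; blocklen=512; data_offset=2048
total_blocks=$(( size_mib * 1024 * 1024 / blocklen ))
usable_blocks=$(( total_blocks - data_offset ))
echo "blockcnt: ${usable_blocks}, blocklen: ${blocklen}"
```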
00:07:10.674 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.674 05:45:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:10.674 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.675 05:45:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:10.675 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.675 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.675 05:45:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:07:10.675 05:45:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:07:10.675 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.675 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:10.675 [2024-12-12 05:45:18.159778] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:07:10.675 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.675 05:45:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:07:10.675 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.675 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.243 malloc2 00:07:11.244 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.244 05:45:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:07:11.244 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.244 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.244 [2024-12-12 05:45:18.669393] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:11.244 [2024-12-12 05:45:18.685267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:11.244 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.244 [2024-12-12 05:45:18.687097] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:07:11.244 05:45:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:11.244 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.244 05:45:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:07:11.244 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.244 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:11.244 05:45:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:07:11.244 05:45:18 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 61063 00:07:11.244 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 61063 ']' 00:07:11.244 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 61063 00:07:11.244 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:07:11.244 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:07:11.244 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61063 00:07:11.244 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:11.244 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:11.507 killing process with pid 61063 00:07:11.507 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61063' 00:07:11.507 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 61063 00:07:11.507 [2024-12-12 05:45:18.765165] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:11.507 05:45:18 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 61063 00:07:11.507 [2024-12-12 05:45:18.766904] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:07:11.507 [2024-12-12 05:45:18.766986] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:11.507 [2024-12-12 05:45:18.767005] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:07:11.507 [2024-12-12 05:45:18.801149] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:11.507 [2024-12-12 05:45:18.801438] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:11.507 [2024-12-12 05:45:18.801452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:13.419 [2024-12-12 05:45:20.479393] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:14.358 05:45:21 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0 00:07:14.358 00:07:14.358 real 0m4.527s 00:07:14.358 user 0m4.411s 00:07:14.358 sys 0m0.510s 00:07:14.358 05:45:21 
bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.358 05:45:21 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.358 ************************************ 00:07:14.358 END TEST raid1_resize_data_offset_test 00:07:14.358 ************************************ 00:07:14.358 05:45:21 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:07:14.358 05:45:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:14.358 05:45:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.358 05:45:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:14.359 ************************************ 00:07:14.359 START TEST raid0_resize_superblock_test 00:07:14.359 ************************************ 00:07:14.359 05:45:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:07:14.359 05:45:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:07:14.359 05:45:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=61142 00:07:14.359 05:45:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:14.359 05:45:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 61142' 00:07:14.359 Process raid pid: 61142 00:07:14.359 05:45:21 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 61142 00:07:14.359 05:45:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61142 ']' 00:07:14.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:14.359 05:45:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.359 05:45:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.359 05:45:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.359 05:45:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.359 05:45:21 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.359 [2024-12-12 05:45:21.694831] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:07:14.359 [2024-12-12 05:45:21.694945] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.359 [2024-12-12 05:45:21.866172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.618 [2024-12-12 05:45:21.980283] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.878 [2024-12-12 05:45:22.181728] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:14.878 [2024-12-12 05:45:22.181763] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.138 05:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:15.138 05:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:15.138 05:45:22 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:07:15.138 05:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:15.138 05:45:22 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.707 malloc0 00:07:15.707 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.707 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:15.707 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.707 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.707 [2024-12-12 05:45:23.008529] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:15.707 [2024-12-12 05:45:23.008587] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.707 [2024-12-12 05:45:23.008610] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:15.707 [2024-12-12 05:45:23.008623] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.707 [2024-12-12 05:45:23.010730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.707 [2024-12-12 05:45:23.010843] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:15.707 pt0 00:07:15.707 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.707 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:15.707 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.707 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.707 a563aefd-3553-4f67-896b-d2a70c3ebade 00:07:15.707 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.707 05:45:23 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:15.707 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.707 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.707 842feaf5-cb9a-47c5-b8f7-11de46e79a35 00:07:15.707 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.707 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:15.707 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.707 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.707 ce047e26-defd-4a9d-ac46-ef8250dca6b3 00:07:15.707 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.707 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:15.707 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:15.707 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.707 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.707 [2024-12-12 05:45:23.141390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 842feaf5-cb9a-47c5-b8f7-11de46e79a35 is claimed 00:07:15.707 [2024-12-12 05:45:23.141550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ce047e26-defd-4a9d-ac46-ef8250dca6b3 is claimed 00:07:15.707 [2024-12-12 05:45:23.141711] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:15.707 [2024-12-12 05:45:23.141731] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:07:15.707 [2024-12-12 05:45:23.141980] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:15.707 [2024-12-12 05:45:23.142162] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:15.707 [2024-12-12 05:45:23.142173] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:15.708 [2024-12-12 05:45:23.142311] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.708 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.708 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:15.708 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.708 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:15.708 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.708 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.708 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:15.708 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:15.708 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.708 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.708 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:15.708 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.968 05:45:23 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.968 [2024-12-12 05:45:23.253398] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.968 [2024-12-12 05:45:23.297265] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:15.968 [2024-12-12 05:45:23.297289] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '842feaf5-cb9a-47c5-b8f7-11de46e79a35' was resized: old size 131072, new size 204800 
00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.968 [2024-12-12 05:45:23.309184] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:15.968 [2024-12-12 05:45:23.309206] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'ce047e26-defd-4a9d-ac46-ef8250dca6b3' was resized: old size 131072, new size 204800 00:07:15.968 [2024-12-12 05:45:23.309234] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.968 05:45:23 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:07:15.968 [2024-12-12 05:45:23.421189] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:07:15.968 [2024-12-12 05:45:23.468823] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:15.968 [2024-12-12 05:45:23.468932] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:15.968 [2024-12-12 05:45:23.468948] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:15.968 [2024-12-12 05:45:23.468961] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:15.968 [2024-12-12 05:45:23.469074] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:15.968 [2024-12-12 05:45:23.469107] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:15.968 [2024-12-12 05:45:23.469128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.968 [2024-12-12 05:45:23.480737] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:15.968 [2024-12-12 05:45:23.480784] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:15.968 [2024-12-12 05:45:23.480802] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:15.968 [2024-12-12 05:45:23.480811] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:15.968 [2024-12-12 05:45:23.482965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:15.968 
[2024-12-12 05:45:23.483004] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:15.968 [2024-12-12 05:45:23.484641] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 842feaf5-cb9a-47c5-b8f7-11de46e79a35 00:07:15.968 [2024-12-12 05:45:23.484711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 842feaf5-cb9a-47c5-b8f7-11de46e79a35 is claimed 00:07:15.968 [2024-12-12 05:45:23.484809] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev ce047e26-defd-4a9d-ac46-ef8250dca6b3 00:07:15.968 [2024-12-12 05:45:23.484826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev ce047e26-defd-4a9d-ac46-ef8250dca6b3 is claimed 00:07:15.968 [2024-12-12 05:45:23.484949] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev ce047e26-defd-4a9d-ac46-ef8250dca6b3 (2) smaller than existing raid bdev Raid (3) 00:07:15.968 [2024-12-12 05:45:23.484970] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 842feaf5-cb9a-47c5-b8f7-11de46e79a35: File exists 00:07:15.968 [2024-12-12 05:45:23.485007] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:15.968 [2024-12-12 05:45:23.485018] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:07:15.968 [2024-12-12 05:45:23.485245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:15.968 [2024-12-12 05:45:23.485379] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:15.968 [2024-12-12 05:45:23.485387] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:15.968 [2024-12-12 05:45:23.485575] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:15.968 pt0 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:15.968 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:15.969 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.229 [2024-12-12 05:45:23.509179] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 61142 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61142 ']' 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 61142 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61142 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:16.229 killing process with pid 61142 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61142' 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 61142 00:07:16.229 [2024-12-12 05:45:23.589281] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:16.229 [2024-12-12 05:45:23.589340] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.229 [2024-12-12 05:45:23.589379] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:16.229 [2024-12-12 05:45:23.589387] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:16.229 05:45:23 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 61142 00:07:17.608 [2024-12-12 05:45:24.935285] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:18.547 05:45:25 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:18.547 00:07:18.547 real 0m4.366s 00:07:18.547 user 0m4.575s 00:07:18.547 sys 0m0.545s 00:07:18.547 ************************************ 00:07:18.547 END TEST raid0_resize_superblock_test 00:07:18.547 
************************************ 00:07:18.547 05:45:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.547 05:45:25 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.547 05:45:26 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:07:18.547 05:45:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:18.547 05:45:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.547 05:45:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:18.547 ************************************ 00:07:18.547 START TEST raid1_resize_superblock_test 00:07:18.547 ************************************ 00:07:18.547 05:45:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:07:18.547 05:45:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:07:18.547 05:45:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=61235 00:07:18.547 Process raid pid: 61235 00:07:18.547 05:45:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:18.547 05:45:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 61235' 00:07:18.547 05:45:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 61235 00:07:18.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:18.547 05:45:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61235 ']' 00:07:18.547 05:45:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:18.547 05:45:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.547 05:45:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:18.547 05:45:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.547 05:45:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.806 [2024-12-12 05:45:26.137585] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:07:18.806 [2024-12-12 05:45:26.137795] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.806 [2024-12-12 05:45:26.311220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.066 [2024-12-12 05:45:26.412390] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.326 [2024-12-12 05:45:26.610913] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.326 [2024-12-12 05:45:26.610993] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:19.585 05:45:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:19.585 05:45:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:19.585 05:45:26 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 
00:07:19.585 05:45:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:19.585 05:45:26 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.155 malloc0 00:07:20.155 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.155 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:20.155 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.155 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.155 [2024-12-12 05:45:27.464577] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:20.155 [2024-12-12 05:45:27.464693] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.155 [2024-12-12 05:45:27.464737] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:20.155 [2024-12-12 05:45:27.464769] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.155 [2024-12-12 05:45:27.466930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.155 [2024-12-12 05:45:27.467004] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:20.155 pt0 00:07:20.155 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.155 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:07:20.155 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.155 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.155 469fc108-7709-400f-8340-b5053ae16331 00:07:20.155 05:45:27 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.155 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:07:20.155 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.155 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.155 882af07a-11e3-4df5-9246-4e1d215c2c96 00:07:20.155 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.155 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:07:20.155 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.155 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.155 2b7e548e-0516-4d8f-b1ed-d692375a4375 00:07:20.155 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.155 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:07:20.155 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:07:20.155 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.155 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.155 [2024-12-12 05:45:27.596403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 882af07a-11e3-4df5-9246-4e1d215c2c96 is claimed 00:07:20.155 [2024-12-12 05:45:27.596488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2b7e548e-0516-4d8f-b1ed-d692375a4375 is claimed 00:07:20.155 [2024-12-12 05:45:27.596657] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:20.155 [2024-12-12 05:45:27.596674] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:07:20.155 [2024-12-12 05:45:27.596950] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:20.155 [2024-12-12 05:45:27.597163] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:20.155 [2024-12-12 05:45:27.597175] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:20.155 [2024-12-12 05:45:27.597328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:20.155 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.156 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:20.156 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.156 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:07:20.156 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.156 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.156 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:07:20.156 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:20.156 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.156 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.156 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:07:20.156 05:45:27 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:20.416 [2024-12-12 05:45:27.708409] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.416 [2024-12-12 05:45:27.752277] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:20.416 [2024-12-12 05:45:27.752302] bdev_raid.c:2330:raid_bdev_resize_base_bdev: 
*NOTICE*: base_bdev '882af07a-11e3-4df5-9246-4e1d215c2c96' was resized: old size 131072, new size 204800 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.416 [2024-12-12 05:45:27.760201] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:20.416 [2024-12-12 05:45:27.760266] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2b7e548e-0516-4d8f-b1ed-d692375a4375' was resized: old size 131072, new size 204800 00:07:20.416 [2024-12-12 05:45:27.760298] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:20.416 05:45:27 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:20.416 [2024-12-12 05:45:27.868111] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.416 [2024-12-12 05:45:27.915850] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:20.416 [2024-12-12 05:45:27.915913] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:20.416 [2024-12-12 05:45:27.915934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:20.416 [2024-12-12 05:45:27.916065] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:20.416 [2024-12-12 05:45:27.916222] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:20.416 [2024-12-12 05:45:27.916292] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:20.416 [2024-12-12 05:45:27.916306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.416 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:20.417 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.417 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.417 [2024-12-12 05:45:27.927781] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:20.417 [2024-12-12 05:45:27.927828] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:20.417 [2024-12-12 05:45:27.927847] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:20.417 [2024-12-12 05:45:27.927858] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:20.417 
[2024-12-12 05:45:27.929866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:20.417 [2024-12-12 05:45:27.929902] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:20.417 [2024-12-12 05:45:27.931409] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 882af07a-11e3-4df5-9246-4e1d215c2c96 00:07:20.417 [2024-12-12 05:45:27.931570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 882af07a-11e3-4df5-9246-4e1d215c2c96 is claimed 00:07:20.417 [2024-12-12 05:45:27.931689] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2b7e548e-0516-4d8f-b1ed-d692375a4375 00:07:20.417 [2024-12-12 05:45:27.931708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2b7e548e-0516-4d8f-b1ed-d692375a4375 is claimed 00:07:20.417 [2024-12-12 05:45:27.931844] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 2b7e548e-0516-4d8f-b1ed-d692375a4375 (2) smaller than existing raid bdev Raid (3) 00:07:20.417 [2024-12-12 05:45:27.931865] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 882af07a-11e3-4df5-9246-4e1d215c2c96: File exists 00:07:20.417 [2024-12-12 05:45:27.931903] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:20.417 [2024-12-12 05:45:27.931914] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:20.417 [2024-12-12 05:45:27.932144] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:20.417 [2024-12-12 05:45:27.932308] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:20.417 [2024-12-12 05:45:27.932316] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:20.417 [2024-12-12 05:45:27.932486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:20.417 pt0 00:07:20.417 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.417 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:20.417 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.417 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.684 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.684 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:20.684 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:20.684 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:20.684 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:20.684 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.684 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.684 [2024-12-12 05:45:27.956052] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.684 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.684 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:20.684 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:20.684 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:20.684 05:45:27 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 61235 00:07:20.684 05:45:27 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@954 -- # '[' -z 61235 ']' 00:07:20.684 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 61235 00:07:20.684 05:45:27 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:07:20.684 05:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.684 05:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61235 00:07:20.684 killing process with pid 61235 00:07:20.684 05:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.684 05:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.684 05:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61235' 00:07:20.684 05:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 61235 00:07:20.684 [2024-12-12 05:45:28.024131] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:20.684 [2024-12-12 05:45:28.024187] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:20.684 [2024-12-12 05:45:28.024226] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:20.684 [2024-12-12 05:45:28.024234] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:20.684 05:45:28 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 61235 00:07:22.064 [2024-12-12 05:45:29.377926] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:23.003 05:45:30 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:23.003 00:07:23.003 real 0m4.383s 00:07:23.003 user 0m4.577s 00:07:23.003 sys 0m0.541s 00:07:23.003 
05:45:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:23.003 ************************************ 00:07:23.003 END TEST raid1_resize_superblock_test 00:07:23.003 ************************************ 00:07:23.003 05:45:30 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:23.003 05:45:30 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:23.003 05:45:30 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:07:23.003 05:45:30 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:23.003 05:45:30 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:23.003 05:45:30 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:23.003 05:45:30 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:23.003 05:45:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:23.003 05:45:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:23.003 05:45:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:23.263 ************************************ 00:07:23.263 START TEST raid_function_test_raid0 00:07:23.263 ************************************ 00:07:23.263 05:45:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:07:23.263 05:45:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:23.263 05:45:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:23.263 05:45:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:23.263 Process raid pid: 61338 00:07:23.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:23.263 05:45:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=61338 00:07:23.263 05:45:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:23.263 05:45:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 61338' 00:07:23.263 05:45:30 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 61338 00:07:23.263 05:45:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 61338 ']' 00:07:23.263 05:45:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.263 05:45:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:23.263 05:45:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.263 05:45:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:23.263 05:45:30 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:23.263 [2024-12-12 05:45:30.616359] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:07:23.263 [2024-12-12 05:45:30.616584] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:23.522 [2024-12-12 05:45:30.787598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.522 [2024-12-12 05:45:30.900386] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.782 [2024-12-12 05:45:31.082746] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.782 [2024-12-12 05:45:31.082784] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:24.042 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.042 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:07:24.042 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:24.042 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.042 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:24.042 Base_1 00:07:24.042 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.042 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:24.042 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.042 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:24.042 Base_2 00:07:24.042 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.042 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:24.042 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.042 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:24.042 [2024-12-12 05:45:31.516385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:24.042 [2024-12-12 05:45:31.518261] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:24.042 [2024-12-12 05:45:31.518328] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:24.042 [2024-12-12 05:45:31.518340] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:24.042 [2024-12-12 05:45:31.518601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:24.042 [2024-12-12 05:45:31.518741] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:24.042 [2024-12-12 05:45:31.518751] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:24.042 [2024-12-12 05:45:31.518944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.042 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.042 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:24.042 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:24.042 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.042 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:24.042 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:24.302 [2024-12-12 05:45:31.752036] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:24.302 /dev/nbd0 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:24.302 
05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.302 1+0 records in 00:07:24.302 1+0 records out 00:07:24.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000576826 s, 7.1 MB/s 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:24.302 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:24.562 05:45:31 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:24.562 05:45:31 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:24.562 {
00:07:24.562 "nbd_device": "/dev/nbd0",
00:07:24.562 "bdev_name": "raid"
00:07:24.562 }
00:07:24.562 ]'
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[
00:07:24.562 {
00:07:24.562 "nbd_device": "/dev/nbd0",
00:07:24.562 "bdev_name": "raid"
00:07:24.562 }
00:07:24.562 ]'
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:07:24.562 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:07:24.822 4096+0 records in
00:07:24.822 4096+0 records out
00:07:24.822 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0354447 s, 59.2 MB/s
00:07:24.822 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:07:24.822 4096+0 records in
00:07:24.822 4096+0 records out
00:07:24.822 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.183995 s, 11.4 MB/s
00:07:24.822 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:07:24.822 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:24.822 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:07:24.822 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:24.822 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:07:24.822 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:07:24.822 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:07:24.822 128+0 records in
00:07:24.822 128+0 records out
00:07:24.822 65536 bytes (66 kB, 64 KiB) copied, 0.00115537 s, 56.7 MB/s
00:07:24.822 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:07:24.822 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:24.822 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:24.822 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:24.822 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:24.822 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:07:24.822 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:07:24.822 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:07:25.083 2035+0 records in
00:07:25.083 2035+0 records out
00:07:25.083 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0159559 s, 65.3 MB/s
00:07:25.083 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:07:25.083 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:25.083 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:25.083 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:25.083 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:25.083 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:07:25.083 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:07:25.083 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:07:25.083 456+0 records in
00:07:25.083 456+0 records out
00:07:25.083 233472 bytes (233 kB, 228 KiB) copied, 0.00219668 s, 106 MB/s
00:07:25.083 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:07:25.083 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:25.083 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:25.083 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:25.083 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:25.083 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0
00:07:25.083 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:07:25.083 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:07:25.083 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:07:25.083 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:25.083 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i
00:07:25.083 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:25.083 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:07:25.343 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:25.343 [2024-12-12 05:45:32.613571] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:25.343 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:25.343 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:25.343 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:25.343 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:25.343 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:25.343 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break
00:07:25.343 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0
00:07:25.343 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:07:25.343 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:07:25.343 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:25.343 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:25.343 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:25.343 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:25.343 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:25.343 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo ''
00:07:25.343 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:25.343 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true
00:07:25.343 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0
00:07:25.343 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0
00:07:25.603 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0
00:07:25.603 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:07:25.603 05:45:32 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 61338
00:07:25.603 05:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 61338 ']'
00:07:25.603 05:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 61338
00:07:25.603 05:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname
00:07:25.603 05:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:25.603 05:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61338
00:07:25.603 killing process with pid 61338
05:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:25.603 05:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:25.603 05:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61338'
00:07:25.603 05:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 61338
00:07:25.603 [2024-12-12 05:45:32.907008] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:25.603 [2024-12-12 05:45:32.907107] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:25.603 [2024-12-12 05:45:32.907156] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:25.603 [2024-12-12 05:45:32.907171] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:07:25.603 05:45:32 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 61338
00:07:25.603 [2024-12-12 05:45:33.104649] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:26.983 ************************************
00:07:26.983 END TEST raid_function_test_raid0
00:07:26.983 ************************************
00:07:26.983 05:45:34 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0
00:07:26.983
00:07:26.983 real 0m3.620s
00:07:26.983 user 0m4.172s
00:07:26.983 sys 0m0.897s
00:07:26.983 05:45:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:26.983 05:45:34 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x
00:07:26.983 05:45:34 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat
00:07:26.983 05:45:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:26.983 05:45:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:26.983 05:45:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:26.983 ************************************
00:07:26.983 START TEST raid_function_test_concat
00:07:26.983 ************************************
00:07:26.983 05:45:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat
00:07:26.983 05:45:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat
00:07:26.983 05:45:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0
00:07:26.983 05:45:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev
00:07:26.983 05:45:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=61463
00:07:26.983 05:45:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:26.983 05:45:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 61463'
00:07:26.983 Process raid pid: 61463
05:45:34 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 61463
00:07:26.983 05:45:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 61463 ']'
00:07:26.983 05:45:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:26.983 05:45:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:26.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
05:45:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:26.983 05:45:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:26.983 05:45:34 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:26.983 [2024-12-12 05:45:34.305182] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization...
00:07:26.983 [2024-12-12 05:45:34.305310] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:26.983 [2024-12-12 05:45:34.477382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:27.243 [2024-12-12 05:45:34.583209] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:27.502 [2024-12-12 05:45:34.780640] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:27.502 [2024-12-12 05:45:34.780678] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:27.761 Base_1
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:27.761 Base_2
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:27.761 [2024-12-12 05:45:35.214285] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:07:27.761 [2024-12-12 05:45:35.216043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:07:27.761 [2024-12-12 05:45:35.216114] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:27.761 [2024-12-12 05:45:35.216127] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:07:27.761 [2024-12-12 05:45:35.216384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:27.761 [2024-12-12 05:45:35.216572] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:27.761 [2024-12-12 05:45:35.216591] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780
00:07:27.761 [2024-12-12 05:45:35.216749] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)'
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']'
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid')
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:07:27.761 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0
00:07:28.020 [2024-12-12 05:45:35.453921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:07:28.020 /dev/nbd0
00:07:28.020 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:07:28.020 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:07:28.020 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:07:28.020 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i
00:07:28.020 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:07:28.020 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:07:28.020 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:07:28.020 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break
00:07:28.020 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:07:28.020 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:07:28.020 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:07:28.020 1+0 records in
00:07:28.020 1+0 records out
00:07:28.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000404021 s, 10.1 MB/s
00:07:28.020 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:28.020 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096
00:07:28.020 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:07:28.020 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:07:28.020 05:45:35 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0
00:07:28.020 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:07:28.020 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:07:28.020 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock
00:07:28.020 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:07:28.020 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:07:28.280 {
00:07:28.280 "nbd_device": "/dev/nbd0",
00:07:28.280 "bdev_name": "raid"
00:07:28.280 }
00:07:28.280 ]'
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[
00:07:28.280 {
00:07:28.280 "nbd_device": "/dev/nbd0",
00:07:28.280 "bdev_name": "raid"
00:07:28.280 }
00:07:28.280 ]'
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:07:28.280 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:07:28.540 4096+0 records in
00:07:28.540 4096+0 records out
00:07:28.540 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.036622 s, 57.3 MB/s
00:07:28.540 05:45:35 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:07:28.540 4096+0 records in
00:07:28.540 4096+0 records out
00:07:28.540 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.181053 s, 11.6 MB/s
00:07:28.540 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:07:28.540 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:28.540 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:07:28.540 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:28.540 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:07:28.540 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:07:28.540 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:07:28.540 128+0 records in
00:07:28.540 128+0 records out
00:07:28.540 65536 bytes (66 kB, 64 KiB) copied, 0.00114832 s, 57.1 MB/s
00:07:28.540 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:07:28.540 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:28.540 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:07:28.800 2035+0 records in
00:07:28.800 2035+0 records out
00:07:28.800 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0140143 s, 74.3 MB/s
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:07:28.800 456+0 records in
00:07:28.800 456+0 records out
00:07:28.800 233472 bytes (233 kB, 228 KiB) copied, 0.00465524 s, 50.2 MB/s
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:07:28.800 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:07:29.061 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:07:29.061 [2024-12-12 05:45:36.340931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:29.061 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:07:29.061 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:07:29.061 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:07:29.061 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:07:29.061 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:07:29.061 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break
00:07:29.061 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0
00:07:29.061 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:07:29.061 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:07:29.061 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:07:29.061 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:07:29.061 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:07:29.061 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:07:29.321 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:07:29.321 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:07:29.321 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:07:29.321 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:07:29.321 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:07:29.321 05:45:36 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:07:29.321 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0
00:07:29.321 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:07:29.321 05:45:36 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 61463
00:07:29.321 05:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 61463 ']'
00:07:29.321 05:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 61463
00:07:29.321 05:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname
00:07:29.321 05:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:29.321 05:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61463
00:07:29.321 05:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:29.321 05:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:29.321 killing process with pid 61463
05:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61463'
00:07:29.321 05:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 61463
00:07:29.321 [2024-12-12 05:45:36.644325] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:29.321 [2024-12-12 05:45:36.644428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:29.321 05:45:36 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 61463
00:07:29.321 [2024-12-12 05:45:36.644488] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:29.321 [2024-12-12 05:45:36.644511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:07:29.321 [2024-12-12 05:45:36.837525] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:30.747 05:45:37 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0
00:07:30.747
00:07:30.747 real 0m3.666s
00:07:30.747 user 0m4.239s
00:07:30.747 sys 0m0.914s
00:07:30.747 05:45:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:30.747 05:45:37 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:07:30.747 ************************************
00:07:30.747 END TEST raid_function_test_concat
00:07:30.747 ************************************
00:07:30.747 05:45:37 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0
00:07:30.747 05:45:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:30.747 05:45:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:30.747 05:45:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:30.747 ************************************
00:07:30.747 START TEST raid0_resize_test
00:07:30.747 ************************************
00:07:30.747 05:45:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0
00:07:30.747 05:45:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0
00:07:30.747 05:45:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:07:30.747 05:45:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:07:30.747 05:45:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:07:30.747 05:45:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:07:30.747 05:45:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:07:30.747 05:45:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:07:30.747 05:45:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:07:30.747 05:45:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=61586
00:07:30.747 05:45:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:07:30.747 05:45:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 61586'
00:07:30.747 Process raid pid: 61586
05:45:37 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 61586
00:07:30.747 05:45:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 61586 ']'
00:07:30.747 05:45:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:30.747 05:45:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:30.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:30.747 05:45:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.747 05:45:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.747 05:45:37 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.747 [2024-12-12 05:45:38.040432] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:07:30.747 [2024-12-12 05:45:38.040566] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.747 [2024-12-12 05:45:38.215990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.007 [2024-12-12 05:45:38.316105] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.007 [2024-12-12 05:45:38.514094] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.007 [2024-12-12 05:45:38.514135] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.578 Base_1 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:31.578 
05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.578 Base_2 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.578 [2024-12-12 05:45:38.900203] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:31.578 [2024-12-12 05:45:38.901918] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:31.578 [2024-12-12 05:45:38.901989] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:31.578 [2024-12-12 05:45:38.902001] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:31.578 [2024-12-12 05:45:38.902244] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:31.578 [2024-12-12 05:45:38.902398] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:31.578 [2024-12-12 05:45:38.902413] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:31.578 [2024-12-12 05:45:38.902570] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:31.578 
05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.578 [2024-12-12 05:45:38.912162] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:31.578 [2024-12-12 05:45:38.912190] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:31.578 true 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.578 [2024-12-12 05:45:38.928296] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 
-- # set +x 00:07:31.578 [2024-12-12 05:45:38.976042] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:31.578 [2024-12-12 05:45:38.976065] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:31.578 [2024-12-12 05:45:38.976092] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:31.578 true 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.578 05:45:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:31.579 05:45:38 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:31.579 05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:31.579 05:45:38 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.579 [2024-12-12 05:45:38.988175] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:31.579 05:45:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:31.579 05:45:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:31.579 05:45:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:31.579 05:45:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:31.579 05:45:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:31.579 05:45:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:31.579 05:45:39 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 61586 00:07:31.579 05:45:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 61586 ']' 00:07:31.579 05:45:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 61586 
00:07:31.579 05:45:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:31.579 05:45:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.579 05:45:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61586 00:07:31.579 05:45:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.579 05:45:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.579 killing process with pid 61586 00:07:31.579 05:45:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61586' 00:07:31.579 05:45:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 61586 00:07:31.579 [2024-12-12 05:45:39.072950] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:31.579 [2024-12-12 05:45:39.073019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:31.579 [2024-12-12 05:45:39.073068] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:31.579 [2024-12-12 05:45:39.073080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:31.579 05:45:39 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 61586 00:07:31.579 [2024-12-12 05:45:39.089782] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:32.961 05:45:40 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:32.961 00:07:32.961 real 0m2.178s 00:07:32.961 user 0m2.325s 00:07:32.961 sys 0m0.334s 00:07:32.962 05:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.962 05:45:40 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.962 ************************************ 00:07:32.962 END TEST 
raid0_resize_test 00:07:32.962 ************************************ 00:07:32.962 05:45:40 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:32.962 05:45:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:32.962 05:45:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.962 05:45:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:32.962 ************************************ 00:07:32.962 START TEST raid1_resize_test 00:07:32.962 ************************************ 00:07:32.962 05:45:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1 00:07:32.962 05:45:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:32.962 05:45:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:32.962 05:45:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:32.962 05:45:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:32.962 05:45:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:32.962 05:45:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:32.962 05:45:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:32.962 05:45:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:32.962 05:45:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=61642 00:07:32.962 05:45:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:32.962 05:45:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 61642' 00:07:32.962 Process raid pid: 61642 00:07:32.962 05:45:40 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 61642 00:07:32.962 05:45:40 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 61642 ']' 00:07:32.962 05:45:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.962 05:45:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.962 05:45:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.962 05:45:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.962 05:45:40 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.962 [2024-12-12 05:45:40.289099] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:07:32.962 [2024-12-12 05:45:40.289227] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:32.962 [2024-12-12 05:45:40.455944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.221 [2024-12-12 05:45:40.572426] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.480 [2024-12-12 05:45:40.771138] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.480 [2024-12-12 05:45:40.771179] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:33.739 05:45:41 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.739 Base_1 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.739 Base_2 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.739 [2024-12-12 05:45:41.136145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:33.739 [2024-12-12 05:45:41.137876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:33.739 [2024-12-12 05:45:41.137959] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:33.739 [2024-12-12 05:45:41.137970] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:33.739 [2024-12-12 05:45:41.138220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:33.739 [2024-12-12 05:45:41.138344] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:33.739 [2024-12-12 05:45:41.138355] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:33.739 [2024-12-12 05:45:41.138508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.739 [2024-12-12 05:45:41.148110] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:33.739 [2024-12-12 05:45:41.148142] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:33.739 true 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.739 [2024-12-12 05:45:41.164256] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:33.739 05:45:41 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.739 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.739 [2024-12-12 05:45:41.207992] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:33.739 [2024-12-12 05:45:41.208018] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:33.739 [2024-12-12 05:45:41.208041] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:33.740 true 00:07:33.740 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.740 05:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:33.740 05:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:33.740 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.740 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.740 [2024-12-12 05:45:41.220124] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.740 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.999 05:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:33.999 05:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:33.999 05:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:33.999 05:45:41 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:33.999 05:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:33.999 05:45:41 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 61642 00:07:33.999 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 61642 ']' 00:07:33.999 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 61642 00:07:33.999 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname 00:07:33.999 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.999 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61642 00:07:33.999 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.999 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.999 killing process with pid 61642 00:07:33.999 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61642' 00:07:33.999 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 61642 00:07:33.999 [2024-12-12 05:45:41.306615] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:33.999 [2024-12-12 05:45:41.306680] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.999 05:45:41 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 61642 00:07:33.999 [2024-12-12 05:45:41.307128] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.999 [2024-12-12 05:45:41.307153] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:33.999 [2024-12-12 05:45:41.323527] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:07:34.939 05:45:42 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:34.939 00:07:34.939 real 0m2.159s 00:07:34.939 user 0m2.305s 00:07:34.939 sys 0m0.321s 00:07:34.939 05:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.939 05:45:42 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:34.939 ************************************ 00:07:34.939 END TEST raid1_resize_test 00:07:34.939 ************************************ 00:07:34.939 05:45:42 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:34.939 05:45:42 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:34.939 05:45:42 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:34.939 05:45:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:34.939 05:45:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.939 05:45:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:34.939 ************************************ 00:07:34.939 START TEST raid_state_function_test 00:07:34.939 ************************************ 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61699 
00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61699' 00:07:34.939 Process raid pid: 61699 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61699 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61699 ']' 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.939 05:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.940 05:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.940 05:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.940 05:45:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.200 [2024-12-12 05:45:42.524963] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:07:35.200 [2024-12-12 05:45:42.525075] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.200 [2024-12-12 05:45:42.698097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.459 [2024-12-12 05:45:42.803629] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.719 [2024-12-12 05:45:42.991308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.719 [2024-12-12 05:45:42.991350] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.979 [2024-12-12 05:45:43.352684] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:35.979 [2024-12-12 05:45:43.352751] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:35.979 [2024-12-12 05:45:43.352778] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:35.979 [2024-12-12 05:45:43.352787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.979 05:45:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:35.979 "name": "Existed_Raid", 00:07:35.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.979 "strip_size_kb": 64, 00:07:35.979 "state": "configuring", 00:07:35.979 
"raid_level": "raid0", 00:07:35.979 "superblock": false, 00:07:35.979 "num_base_bdevs": 2, 00:07:35.979 "num_base_bdevs_discovered": 0, 00:07:35.979 "num_base_bdevs_operational": 2, 00:07:35.979 "base_bdevs_list": [ 00:07:35.979 { 00:07:35.979 "name": "BaseBdev1", 00:07:35.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.979 "is_configured": false, 00:07:35.979 "data_offset": 0, 00:07:35.979 "data_size": 0 00:07:35.979 }, 00:07:35.979 { 00:07:35.979 "name": "BaseBdev2", 00:07:35.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.979 "is_configured": false, 00:07:35.979 "data_offset": 0, 00:07:35.979 "data_size": 0 00:07:35.979 } 00:07:35.979 ] 00:07:35.979 }' 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:35.979 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.550 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:36.550 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.550 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.550 [2024-12-12 05:45:43.843793] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:36.550 [2024-12-12 05:45:43.843832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:36.550 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.550 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:36.550 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.550 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:07:36.550 [2024-12-12 05:45:43.851766] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:36.550 [2024-12-12 05:45:43.851810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:36.550 [2024-12-12 05:45:43.851819] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:36.550 [2024-12-12 05:45:43.851830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:36.550 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.550 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:36.550 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.550 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.550 [2024-12-12 05:45:43.893025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.550 BaseBdev1 00:07:36.550 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.550 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.551 [ 00:07:36.551 { 00:07:36.551 "name": "BaseBdev1", 00:07:36.551 "aliases": [ 00:07:36.551 "96bf0a88-18ac-486f-8b12-317162dca49b" 00:07:36.551 ], 00:07:36.551 "product_name": "Malloc disk", 00:07:36.551 "block_size": 512, 00:07:36.551 "num_blocks": 65536, 00:07:36.551 "uuid": "96bf0a88-18ac-486f-8b12-317162dca49b", 00:07:36.551 "assigned_rate_limits": { 00:07:36.551 "rw_ios_per_sec": 0, 00:07:36.551 "rw_mbytes_per_sec": 0, 00:07:36.551 "r_mbytes_per_sec": 0, 00:07:36.551 "w_mbytes_per_sec": 0 00:07:36.551 }, 00:07:36.551 "claimed": true, 00:07:36.551 "claim_type": "exclusive_write", 00:07:36.551 "zoned": false, 00:07:36.551 "supported_io_types": { 00:07:36.551 "read": true, 00:07:36.551 "write": true, 00:07:36.551 "unmap": true, 00:07:36.551 "flush": true, 00:07:36.551 "reset": true, 00:07:36.551 "nvme_admin": false, 00:07:36.551 "nvme_io": false, 00:07:36.551 "nvme_io_md": false, 00:07:36.551 "write_zeroes": true, 00:07:36.551 "zcopy": true, 00:07:36.551 "get_zone_info": false, 00:07:36.551 "zone_management": false, 00:07:36.551 "zone_append": false, 00:07:36.551 "compare": false, 00:07:36.551 "compare_and_write": false, 00:07:36.551 "abort": true, 00:07:36.551 "seek_hole": false, 00:07:36.551 "seek_data": false, 00:07:36.551 "copy": true, 00:07:36.551 "nvme_iov_md": 
false 00:07:36.551 }, 00:07:36.551 "memory_domains": [ 00:07:36.551 { 00:07:36.551 "dma_device_id": "system", 00:07:36.551 "dma_device_type": 1 00:07:36.551 }, 00:07:36.551 { 00:07:36.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.551 "dma_device_type": 2 00:07:36.551 } 00:07:36.551 ], 00:07:36.551 "driver_specific": {} 00:07:36.551 } 00:07:36.551 ] 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.551 05:45:43 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:36.551 "name": "Existed_Raid", 00:07:36.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.551 "strip_size_kb": 64, 00:07:36.551 "state": "configuring", 00:07:36.551 "raid_level": "raid0", 00:07:36.551 "superblock": false, 00:07:36.551 "num_base_bdevs": 2, 00:07:36.551 "num_base_bdevs_discovered": 1, 00:07:36.551 "num_base_bdevs_operational": 2, 00:07:36.551 "base_bdevs_list": [ 00:07:36.551 { 00:07:36.551 "name": "BaseBdev1", 00:07:36.551 "uuid": "96bf0a88-18ac-486f-8b12-317162dca49b", 00:07:36.551 "is_configured": true, 00:07:36.551 "data_offset": 0, 00:07:36.551 "data_size": 65536 00:07:36.551 }, 00:07:36.551 { 00:07:36.551 "name": "BaseBdev2", 00:07:36.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.551 "is_configured": false, 00:07:36.551 "data_offset": 0, 00:07:36.551 "data_size": 0 00:07:36.551 } 00:07:36.551 ] 00:07:36.551 }' 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:36.551 05:45:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.121 [2024-12-12 05:45:44.364233] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:37.121 [2024-12-12 05:45:44.364284] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.121 [2024-12-12 05:45:44.376255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.121 [2024-12-12 05:45:44.378056] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.121 [2024-12-12 05:45:44.378100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.121 "name": "Existed_Raid", 00:07:37.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.121 "strip_size_kb": 64, 00:07:37.121 "state": "configuring", 00:07:37.121 "raid_level": "raid0", 00:07:37.121 "superblock": false, 00:07:37.121 "num_base_bdevs": 2, 00:07:37.121 "num_base_bdevs_discovered": 1, 00:07:37.121 "num_base_bdevs_operational": 2, 00:07:37.121 "base_bdevs_list": [ 00:07:37.121 { 00:07:37.121 "name": "BaseBdev1", 00:07:37.121 "uuid": "96bf0a88-18ac-486f-8b12-317162dca49b", 00:07:37.121 "is_configured": true, 00:07:37.121 "data_offset": 0, 00:07:37.121 "data_size": 65536 00:07:37.121 }, 00:07:37.121 { 00:07:37.121 "name": "BaseBdev2", 00:07:37.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:37.121 "is_configured": false, 00:07:37.121 "data_offset": 0, 00:07:37.121 "data_size": 0 
00:07:37.121 } 00:07:37.121 ] 00:07:37.121 }' 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:37.121 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.382 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:37.382 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.382 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.382 [2024-12-12 05:45:44.865420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:37.382 [2024-12-12 05:45:44.865473] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:37.382 [2024-12-12 05:45:44.865498] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:37.382 [2024-12-12 05:45:44.865883] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:37.382 [2024-12-12 05:45:44.866084] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:37.382 [2024-12-12 05:45:44.866106] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:37.382 [2024-12-12 05:45:44.866391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.382 BaseBdev2 00:07:37.382 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.382 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:37.382 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:37.382 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:37.382 05:45:44 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:37.382 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:37.382 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:37.382 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:37.382 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.382 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.382 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.382 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:37.382 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.382 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.382 [ 00:07:37.382 { 00:07:37.382 "name": "BaseBdev2", 00:07:37.382 "aliases": [ 00:07:37.382 "7f57c3ea-7769-4935-9936-f4c9676ab1ac" 00:07:37.382 ], 00:07:37.382 "product_name": "Malloc disk", 00:07:37.382 "block_size": 512, 00:07:37.382 "num_blocks": 65536, 00:07:37.382 "uuid": "7f57c3ea-7769-4935-9936-f4c9676ab1ac", 00:07:37.382 "assigned_rate_limits": { 00:07:37.382 "rw_ios_per_sec": 0, 00:07:37.382 "rw_mbytes_per_sec": 0, 00:07:37.382 "r_mbytes_per_sec": 0, 00:07:37.382 "w_mbytes_per_sec": 0 00:07:37.382 }, 00:07:37.382 "claimed": true, 00:07:37.382 "claim_type": "exclusive_write", 00:07:37.382 "zoned": false, 00:07:37.382 "supported_io_types": { 00:07:37.382 "read": true, 00:07:37.382 "write": true, 00:07:37.382 "unmap": true, 00:07:37.382 "flush": true, 00:07:37.382 "reset": true, 00:07:37.382 "nvme_admin": false, 00:07:37.382 "nvme_io": false, 00:07:37.382 "nvme_io_md": 
false, 00:07:37.382 "write_zeroes": true, 00:07:37.382 "zcopy": true, 00:07:37.382 "get_zone_info": false, 00:07:37.382 "zone_management": false, 00:07:37.382 "zone_append": false, 00:07:37.382 "compare": false, 00:07:37.382 "compare_and_write": false, 00:07:37.382 "abort": true, 00:07:37.382 "seek_hole": false, 00:07:37.382 "seek_data": false, 00:07:37.382 "copy": true, 00:07:37.382 "nvme_iov_md": false 00:07:37.382 }, 00:07:37.382 "memory_domains": [ 00:07:37.382 { 00:07:37.382 "dma_device_id": "system", 00:07:37.382 "dma_device_type": 1 00:07:37.382 }, 00:07:37.382 { 00:07:37.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.382 "dma_device_type": 2 00:07:37.382 } 00:07:37.382 ], 00:07:37.382 "driver_specific": {} 00:07:37.382 } 00:07:37.382 ] 00:07:37.382 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.382 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:37.642 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:37.642 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:37.642 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:37.642 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:37.642 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:37.642 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:37.642 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:37.642 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:37.642 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:07:37.642 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:37.642 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:37.643 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:37.643 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:37.643 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:37.643 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.643 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.643 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.643 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:37.643 "name": "Existed_Raid", 00:07:37.643 "uuid": "1f177203-b840-49ba-b002-9d648710d376", 00:07:37.643 "strip_size_kb": 64, 00:07:37.643 "state": "online", 00:07:37.643 "raid_level": "raid0", 00:07:37.643 "superblock": false, 00:07:37.643 "num_base_bdevs": 2, 00:07:37.643 "num_base_bdevs_discovered": 2, 00:07:37.643 "num_base_bdevs_operational": 2, 00:07:37.643 "base_bdevs_list": [ 00:07:37.643 { 00:07:37.643 "name": "BaseBdev1", 00:07:37.643 "uuid": "96bf0a88-18ac-486f-8b12-317162dca49b", 00:07:37.643 "is_configured": true, 00:07:37.643 "data_offset": 0, 00:07:37.643 "data_size": 65536 00:07:37.643 }, 00:07:37.643 { 00:07:37.643 "name": "BaseBdev2", 00:07:37.643 "uuid": "7f57c3ea-7769-4935-9936-f4c9676ab1ac", 00:07:37.643 "is_configured": true, 00:07:37.643 "data_offset": 0, 00:07:37.643 "data_size": 65536 00:07:37.643 } 00:07:37.643 ] 00:07:37.643 }' 00:07:37.643 05:45:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:07:37.643 05:45:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.903 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:37.903 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:37.903 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:37.903 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:37.903 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:37.903 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:37.903 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:37.903 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:37.903 05:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.903 05:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.903 [2024-12-12 05:45:45.352874] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:37.903 05:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.903 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:37.903 "name": "Existed_Raid", 00:07:37.903 "aliases": [ 00:07:37.903 "1f177203-b840-49ba-b002-9d648710d376" 00:07:37.903 ], 00:07:37.903 "product_name": "Raid Volume", 00:07:37.903 "block_size": 512, 00:07:37.903 "num_blocks": 131072, 00:07:37.903 "uuid": "1f177203-b840-49ba-b002-9d648710d376", 00:07:37.903 "assigned_rate_limits": { 00:07:37.903 "rw_ios_per_sec": 0, 00:07:37.903 "rw_mbytes_per_sec": 0, 00:07:37.903 "r_mbytes_per_sec": 
0, 00:07:37.903 "w_mbytes_per_sec": 0 00:07:37.903 }, 00:07:37.903 "claimed": false, 00:07:37.903 "zoned": false, 00:07:37.903 "supported_io_types": { 00:07:37.903 "read": true, 00:07:37.903 "write": true, 00:07:37.903 "unmap": true, 00:07:37.903 "flush": true, 00:07:37.903 "reset": true, 00:07:37.903 "nvme_admin": false, 00:07:37.903 "nvme_io": false, 00:07:37.903 "nvme_io_md": false, 00:07:37.903 "write_zeroes": true, 00:07:37.903 "zcopy": false, 00:07:37.903 "get_zone_info": false, 00:07:37.903 "zone_management": false, 00:07:37.903 "zone_append": false, 00:07:37.903 "compare": false, 00:07:37.903 "compare_and_write": false, 00:07:37.903 "abort": false, 00:07:37.903 "seek_hole": false, 00:07:37.903 "seek_data": false, 00:07:37.903 "copy": false, 00:07:37.903 "nvme_iov_md": false 00:07:37.903 }, 00:07:37.903 "memory_domains": [ 00:07:37.903 { 00:07:37.903 "dma_device_id": "system", 00:07:37.903 "dma_device_type": 1 00:07:37.903 }, 00:07:37.903 { 00:07:37.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.903 "dma_device_type": 2 00:07:37.903 }, 00:07:37.903 { 00:07:37.903 "dma_device_id": "system", 00:07:37.903 "dma_device_type": 1 00:07:37.903 }, 00:07:37.903 { 00:07:37.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:37.903 "dma_device_type": 2 00:07:37.903 } 00:07:37.903 ], 00:07:37.903 "driver_specific": { 00:07:37.903 "raid": { 00:07:37.903 "uuid": "1f177203-b840-49ba-b002-9d648710d376", 00:07:37.903 "strip_size_kb": 64, 00:07:37.903 "state": "online", 00:07:37.903 "raid_level": "raid0", 00:07:37.903 "superblock": false, 00:07:37.903 "num_base_bdevs": 2, 00:07:37.903 "num_base_bdevs_discovered": 2, 00:07:37.903 "num_base_bdevs_operational": 2, 00:07:37.903 "base_bdevs_list": [ 00:07:37.903 { 00:07:37.903 "name": "BaseBdev1", 00:07:37.903 "uuid": "96bf0a88-18ac-486f-8b12-317162dca49b", 00:07:37.903 "is_configured": true, 00:07:37.903 "data_offset": 0, 00:07:37.903 "data_size": 65536 00:07:37.903 }, 00:07:37.903 { 00:07:37.903 "name": "BaseBdev2", 
00:07:37.903 "uuid": "7f57c3ea-7769-4935-9936-f4c9676ab1ac", 00:07:37.903 "is_configured": true, 00:07:37.903 "data_offset": 0, 00:07:37.903 "data_size": 65536 00:07:37.903 } 00:07:37.903 ] 00:07:37.903 } 00:07:37.903 } 00:07:37.904 }' 00:07:37.904 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:38.164 BaseBdev2' 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.164 [2024-12-12 05:45:45.548335] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:38.164 [2024-12-12 05:45:45.548372] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:38.164 [2024-12-12 05:45:45.548418] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.164 05:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.424 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:38.424 "name": "Existed_Raid", 00:07:38.424 "uuid": "1f177203-b840-49ba-b002-9d648710d376", 00:07:38.424 "strip_size_kb": 64, 00:07:38.424 
"state": "offline", 00:07:38.424 "raid_level": "raid0", 00:07:38.424 "superblock": false, 00:07:38.424 "num_base_bdevs": 2, 00:07:38.424 "num_base_bdevs_discovered": 1, 00:07:38.424 "num_base_bdevs_operational": 1, 00:07:38.424 "base_bdevs_list": [ 00:07:38.424 { 00:07:38.424 "name": null, 00:07:38.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.424 "is_configured": false, 00:07:38.424 "data_offset": 0, 00:07:38.424 "data_size": 65536 00:07:38.424 }, 00:07:38.424 { 00:07:38.424 "name": "BaseBdev2", 00:07:38.424 "uuid": "7f57c3ea-7769-4935-9936-f4c9676ab1ac", 00:07:38.424 "is_configured": true, 00:07:38.424 "data_offset": 0, 00:07:38.424 "data_size": 65536 00:07:38.424 } 00:07:38.424 ] 00:07:38.424 }' 00:07:38.424 05:45:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:38.424 05:45:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.684 05:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:38.684 05:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:38.684 05:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.684 05:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:38.684 05:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.684 05:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.684 05:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.684 05:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:38.684 05:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:38.684 05:45:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:38.684 05:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.684 05:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.684 [2024-12-12 05:45:46.134316] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:38.684 [2024-12-12 05:45:46.134375] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61699 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61699 ']' 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61699 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61699 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.944 killing process with pid 61699 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61699' 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61699 00:07:38.944 [2024-12-12 05:45:46.319192] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:38.944 05:45:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61699 00:07:38.944 [2024-12-12 05:45:46.335871] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:40.329 00:07:40.329 real 0m5.009s 00:07:40.329 user 0m7.296s 00:07:40.329 sys 0m0.775s 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.329 ************************************ 00:07:40.329 END TEST raid_state_function_test 00:07:40.329 ************************************ 00:07:40.329 05:45:47 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:40.329 05:45:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:07:40.329 05:45:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.329 05:45:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:40.329 ************************************ 00:07:40.329 START TEST raid_state_function_test_sb 00:07:40.329 ************************************ 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61952 00:07:40.329 Process raid pid: 61952 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61952' 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61952 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61952 ']' 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.329 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.329 05:45:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:40.329 [2024-12-12 05:45:47.603024] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:07:40.329 [2024-12-12 05:45:47.603159] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.329 [2024-12-12 05:45:47.760218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.589 [2024-12-12 05:45:47.871551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.589 [2024-12-12 05:45:48.071658] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.589 [2024-12-12 05:45:48.071692] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.158 [2024-12-12 05:45:48.435760] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:07:41.158 [2024-12-12 05:45:48.435843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:41.158 [2024-12-12 05:45:48.435854] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.158 [2024-12-12 05:45:48.435863] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.158 "name": "Existed_Raid", 00:07:41.158 "uuid": "a0e1dc41-dcb1-48c2-8f75-35cd90013e43", 00:07:41.158 "strip_size_kb": 64, 00:07:41.158 "state": "configuring", 00:07:41.158 "raid_level": "raid0", 00:07:41.158 "superblock": true, 00:07:41.158 "num_base_bdevs": 2, 00:07:41.158 "num_base_bdevs_discovered": 0, 00:07:41.158 "num_base_bdevs_operational": 2, 00:07:41.158 "base_bdevs_list": [ 00:07:41.158 { 00:07:41.158 "name": "BaseBdev1", 00:07:41.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.158 "is_configured": false, 00:07:41.158 "data_offset": 0, 00:07:41.158 "data_size": 0 00:07:41.158 }, 00:07:41.158 { 00:07:41.158 "name": "BaseBdev2", 00:07:41.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.158 "is_configured": false, 00:07:41.158 "data_offset": 0, 00:07:41.158 "data_size": 0 00:07:41.158 } 00:07:41.158 ] 00:07:41.158 }' 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.158 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.418 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:41.418 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.418 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.418 [2024-12-12 05:45:48.886919] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:07:41.418 [2024-12-12 05:45:48.886957] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:41.418 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.418 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:41.418 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.418 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.418 [2024-12-12 05:45:48.898905] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:41.418 [2024-12-12 05:45:48.898945] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:41.418 [2024-12-12 05:45:48.898955] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.418 [2024-12-12 05:45:48.898966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.418 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.418 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:41.418 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.418 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.679 [2024-12-12 05:45:48.944144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:41.679 BaseBdev1 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.679 [ 00:07:41.679 { 00:07:41.679 "name": "BaseBdev1", 00:07:41.679 "aliases": [ 00:07:41.679 "d1a8e9ae-e995-4d82-928a-8a3fbe385f5d" 00:07:41.679 ], 00:07:41.679 "product_name": "Malloc disk", 00:07:41.679 "block_size": 512, 00:07:41.679 "num_blocks": 65536, 00:07:41.679 "uuid": "d1a8e9ae-e995-4d82-928a-8a3fbe385f5d", 00:07:41.679 "assigned_rate_limits": { 00:07:41.679 "rw_ios_per_sec": 0, 00:07:41.679 "rw_mbytes_per_sec": 0, 00:07:41.679 "r_mbytes_per_sec": 0, 00:07:41.679 "w_mbytes_per_sec": 0 00:07:41.679 }, 00:07:41.679 "claimed": true, 
00:07:41.679 "claim_type": "exclusive_write", 00:07:41.679 "zoned": false, 00:07:41.679 "supported_io_types": { 00:07:41.679 "read": true, 00:07:41.679 "write": true, 00:07:41.679 "unmap": true, 00:07:41.679 "flush": true, 00:07:41.679 "reset": true, 00:07:41.679 "nvme_admin": false, 00:07:41.679 "nvme_io": false, 00:07:41.679 "nvme_io_md": false, 00:07:41.679 "write_zeroes": true, 00:07:41.679 "zcopy": true, 00:07:41.679 "get_zone_info": false, 00:07:41.679 "zone_management": false, 00:07:41.679 "zone_append": false, 00:07:41.679 "compare": false, 00:07:41.679 "compare_and_write": false, 00:07:41.679 "abort": true, 00:07:41.679 "seek_hole": false, 00:07:41.679 "seek_data": false, 00:07:41.679 "copy": true, 00:07:41.679 "nvme_iov_md": false 00:07:41.679 }, 00:07:41.679 "memory_domains": [ 00:07:41.679 { 00:07:41.679 "dma_device_id": "system", 00:07:41.679 "dma_device_type": 1 00:07:41.679 }, 00:07:41.679 { 00:07:41.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.679 "dma_device_type": 2 00:07:41.679 } 00:07:41.679 ], 00:07:41.679 "driver_specific": {} 00:07:41.679 } 00:07:41.679 ] 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.679 05:45:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.679 05:45:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.679 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.679 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:41.679 "name": "Existed_Raid", 00:07:41.679 "uuid": "1643fa40-d45a-4610-8f26-1dd43d4c6d39", 00:07:41.679 "strip_size_kb": 64, 00:07:41.679 "state": "configuring", 00:07:41.679 "raid_level": "raid0", 00:07:41.679 "superblock": true, 00:07:41.679 "num_base_bdevs": 2, 00:07:41.679 "num_base_bdevs_discovered": 1, 00:07:41.679 "num_base_bdevs_operational": 2, 00:07:41.679 "base_bdevs_list": [ 00:07:41.679 { 00:07:41.679 "name": "BaseBdev1", 00:07:41.679 "uuid": "d1a8e9ae-e995-4d82-928a-8a3fbe385f5d", 00:07:41.679 "is_configured": true, 00:07:41.679 "data_offset": 2048, 00:07:41.679 "data_size": 63488 00:07:41.679 }, 00:07:41.679 { 00:07:41.679 "name": "BaseBdev2", 00:07:41.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.679 
"is_configured": false, 00:07:41.679 "data_offset": 0, 00:07:41.679 "data_size": 0 00:07:41.679 } 00:07:41.679 ] 00:07:41.679 }' 00:07:41.679 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:41.679 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.939 [2024-12-12 05:45:49.399443] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:41.939 [2024-12-12 05:45:49.399491] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.939 [2024-12-12 05:45:49.411476] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:41.939 [2024-12-12 05:45:49.413241] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:41.939 [2024-12-12 05:45:49.413297] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.939 05:45:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.939 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.940 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:41.940 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.199 05:45:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.199 "name": "Existed_Raid", 00:07:42.199 "uuid": "28b52644-c7f7-4727-bf92-e755e749c396", 00:07:42.199 "strip_size_kb": 64, 00:07:42.199 "state": "configuring", 00:07:42.199 "raid_level": "raid0", 00:07:42.199 "superblock": true, 00:07:42.199 "num_base_bdevs": 2, 00:07:42.199 "num_base_bdevs_discovered": 1, 00:07:42.199 "num_base_bdevs_operational": 2, 00:07:42.199 "base_bdevs_list": [ 00:07:42.199 { 00:07:42.199 "name": "BaseBdev1", 00:07:42.199 "uuid": "d1a8e9ae-e995-4d82-928a-8a3fbe385f5d", 00:07:42.199 "is_configured": true, 00:07:42.199 "data_offset": 2048, 00:07:42.199 "data_size": 63488 00:07:42.199 }, 00:07:42.199 { 00:07:42.199 "name": "BaseBdev2", 00:07:42.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:42.199 "is_configured": false, 00:07:42.199 "data_offset": 0, 00:07:42.199 "data_size": 0 00:07:42.199 } 00:07:42.199 ] 00:07:42.199 }' 00:07:42.199 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.199 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.459 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:42.459 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.459 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.459 [2024-12-12 05:45:49.837214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:42.459 [2024-12-12 05:45:49.837469] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:42.459 [2024-12-12 05:45:49.837523] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:42.459 [2024-12-12 05:45:49.837896] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:07:42.459 BaseBdev2 00:07:42.459 [2024-12-12 05:45:49.838084] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:42.459 [2024-12-12 05:45:49.838101] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:42.459 [2024-12-12 05:45:49.838281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.459 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.459 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:42.459 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:07:42.459 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:42.459 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:07:42.459 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:42.459 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:42.459 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:07:42.459 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.459 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.460 05:45:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.460 [ 00:07:42.460 { 00:07:42.460 "name": "BaseBdev2", 00:07:42.460 "aliases": [ 00:07:42.460 "6c75bb0a-3866-460c-a7a8-8dbfe9d28e32" 00:07:42.460 ], 00:07:42.460 "product_name": "Malloc disk", 00:07:42.460 "block_size": 512, 00:07:42.460 "num_blocks": 65536, 00:07:42.460 "uuid": "6c75bb0a-3866-460c-a7a8-8dbfe9d28e32", 00:07:42.460 "assigned_rate_limits": { 00:07:42.460 "rw_ios_per_sec": 0, 00:07:42.460 "rw_mbytes_per_sec": 0, 00:07:42.460 "r_mbytes_per_sec": 0, 00:07:42.460 "w_mbytes_per_sec": 0 00:07:42.460 }, 00:07:42.460 "claimed": true, 00:07:42.460 "claim_type": "exclusive_write", 00:07:42.460 "zoned": false, 00:07:42.460 "supported_io_types": { 00:07:42.460 "read": true, 00:07:42.460 "write": true, 00:07:42.460 "unmap": true, 00:07:42.460 "flush": true, 00:07:42.460 "reset": true, 00:07:42.460 "nvme_admin": false, 00:07:42.460 "nvme_io": false, 00:07:42.460 "nvme_io_md": false, 00:07:42.460 "write_zeroes": true, 00:07:42.460 "zcopy": true, 00:07:42.460 "get_zone_info": false, 00:07:42.460 "zone_management": false, 00:07:42.460 "zone_append": false, 00:07:42.460 "compare": false, 00:07:42.460 "compare_and_write": false, 00:07:42.460 "abort": true, 00:07:42.460 "seek_hole": false, 00:07:42.460 "seek_data": false, 00:07:42.460 "copy": true, 00:07:42.460 "nvme_iov_md": false 00:07:42.460 }, 00:07:42.460 "memory_domains": [ 00:07:42.460 { 00:07:42.460 "dma_device_id": "system", 00:07:42.460 "dma_device_type": 1 00:07:42.460 }, 00:07:42.460 { 00:07:42.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:42.460 "dma_device_type": 2 00:07:42.460 } 00:07:42.460 ], 00:07:42.460 "driver_specific": {} 00:07:42.460 } 00:07:42.460 ] 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:07:42.460 05:45:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.460 05:45:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:42.460 "name": "Existed_Raid", 00:07:42.460 "uuid": "28b52644-c7f7-4727-bf92-e755e749c396", 00:07:42.460 "strip_size_kb": 64, 00:07:42.460 "state": "online", 00:07:42.460 "raid_level": "raid0", 00:07:42.460 "superblock": true, 00:07:42.460 "num_base_bdevs": 2, 00:07:42.460 "num_base_bdevs_discovered": 2, 00:07:42.460 "num_base_bdevs_operational": 2, 00:07:42.460 "base_bdevs_list": [ 00:07:42.460 { 00:07:42.460 "name": "BaseBdev1", 00:07:42.460 "uuid": "d1a8e9ae-e995-4d82-928a-8a3fbe385f5d", 00:07:42.460 "is_configured": true, 00:07:42.460 "data_offset": 2048, 00:07:42.460 "data_size": 63488 00:07:42.460 }, 00:07:42.460 { 00:07:42.460 "name": "BaseBdev2", 00:07:42.460 "uuid": "6c75bb0a-3866-460c-a7a8-8dbfe9d28e32", 00:07:42.460 "is_configured": true, 00:07:42.460 "data_offset": 2048, 00:07:42.460 "data_size": 63488 00:07:42.460 } 00:07:42.460 ] 00:07:42.460 }' 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:42.460 05:45:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.029 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:43.029 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:43.029 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:43.029 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:43.029 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:43.029 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:43.029 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:07:43.029 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:43.029 05:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.029 05:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.029 [2024-12-12 05:45:50.336650] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:43.029 05:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.029 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:43.029 "name": "Existed_Raid", 00:07:43.029 "aliases": [ 00:07:43.029 "28b52644-c7f7-4727-bf92-e755e749c396" 00:07:43.029 ], 00:07:43.029 "product_name": "Raid Volume", 00:07:43.029 "block_size": 512, 00:07:43.029 "num_blocks": 126976, 00:07:43.029 "uuid": "28b52644-c7f7-4727-bf92-e755e749c396", 00:07:43.029 "assigned_rate_limits": { 00:07:43.029 "rw_ios_per_sec": 0, 00:07:43.029 "rw_mbytes_per_sec": 0, 00:07:43.029 "r_mbytes_per_sec": 0, 00:07:43.029 "w_mbytes_per_sec": 0 00:07:43.029 }, 00:07:43.029 "claimed": false, 00:07:43.029 "zoned": false, 00:07:43.029 "supported_io_types": { 00:07:43.029 "read": true, 00:07:43.029 "write": true, 00:07:43.029 "unmap": true, 00:07:43.029 "flush": true, 00:07:43.029 "reset": true, 00:07:43.029 "nvme_admin": false, 00:07:43.029 "nvme_io": false, 00:07:43.029 "nvme_io_md": false, 00:07:43.029 "write_zeroes": true, 00:07:43.029 "zcopy": false, 00:07:43.029 "get_zone_info": false, 00:07:43.029 "zone_management": false, 00:07:43.029 "zone_append": false, 00:07:43.029 "compare": false, 00:07:43.029 "compare_and_write": false, 00:07:43.029 "abort": false, 00:07:43.029 "seek_hole": false, 00:07:43.029 "seek_data": false, 00:07:43.029 "copy": false, 00:07:43.029 "nvme_iov_md": false 00:07:43.029 }, 00:07:43.029 "memory_domains": [ 00:07:43.029 { 00:07:43.029 
"dma_device_id": "system", 00:07:43.029 "dma_device_type": 1 00:07:43.029 }, 00:07:43.029 { 00:07:43.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.029 "dma_device_type": 2 00:07:43.029 }, 00:07:43.029 { 00:07:43.029 "dma_device_id": "system", 00:07:43.029 "dma_device_type": 1 00:07:43.029 }, 00:07:43.029 { 00:07:43.029 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:43.029 "dma_device_type": 2 00:07:43.029 } 00:07:43.029 ], 00:07:43.029 "driver_specific": { 00:07:43.029 "raid": { 00:07:43.029 "uuid": "28b52644-c7f7-4727-bf92-e755e749c396", 00:07:43.029 "strip_size_kb": 64, 00:07:43.029 "state": "online", 00:07:43.029 "raid_level": "raid0", 00:07:43.029 "superblock": true, 00:07:43.029 "num_base_bdevs": 2, 00:07:43.029 "num_base_bdevs_discovered": 2, 00:07:43.029 "num_base_bdevs_operational": 2, 00:07:43.029 "base_bdevs_list": [ 00:07:43.029 { 00:07:43.029 "name": "BaseBdev1", 00:07:43.029 "uuid": "d1a8e9ae-e995-4d82-928a-8a3fbe385f5d", 00:07:43.029 "is_configured": true, 00:07:43.029 "data_offset": 2048, 00:07:43.029 "data_size": 63488 00:07:43.029 }, 00:07:43.029 { 00:07:43.029 "name": "BaseBdev2", 00:07:43.029 "uuid": "6c75bb0a-3866-460c-a7a8-8dbfe9d28e32", 00:07:43.029 "is_configured": true, 00:07:43.029 "data_offset": 2048, 00:07:43.029 "data_size": 63488 00:07:43.029 } 00:07:43.029 ] 00:07:43.029 } 00:07:43.029 } 00:07:43.029 }' 00:07:43.029 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:43.030 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:43.030 BaseBdev2' 00:07:43.030 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.030 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:43.030 05:45:50 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.030 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:43.030 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.030 05:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.030 05:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.030 05:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.030 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.030 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.030 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:43.030 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:43.030 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:43.030 05:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.030 05:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.030 05:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.030 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:43.030 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:43.030 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:07:43.030 05:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.290 [2024-12-12 05:45:50.552061] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:43.290 [2024-12-12 05:45:50.552096] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:43.290 [2024-12-12 05:45:50.552143] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:43.290 "name": "Existed_Raid", 00:07:43.290 "uuid": "28b52644-c7f7-4727-bf92-e755e749c396", 00:07:43.290 "strip_size_kb": 64, 00:07:43.290 "state": "offline", 00:07:43.290 "raid_level": "raid0", 00:07:43.290 "superblock": true, 00:07:43.290 "num_base_bdevs": 2, 00:07:43.290 "num_base_bdevs_discovered": 1, 00:07:43.290 "num_base_bdevs_operational": 1, 00:07:43.290 "base_bdevs_list": [ 00:07:43.290 { 00:07:43.290 "name": null, 00:07:43.290 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:43.290 "is_configured": false, 00:07:43.290 "data_offset": 0, 00:07:43.290 "data_size": 63488 00:07:43.290 }, 00:07:43.290 { 00:07:43.290 "name": "BaseBdev2", 00:07:43.290 "uuid": "6c75bb0a-3866-460c-a7a8-8dbfe9d28e32", 00:07:43.290 "is_configured": true, 00:07:43.290 "data_offset": 2048, 00:07:43.290 "data_size": 63488 00:07:43.290 } 00:07:43.290 ] 
00:07:43.290 }' 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:43.290 05:45:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.859 [2024-12-12 05:45:51.142321] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:43.859 [2024-12-12 05:45:51.142380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.859 05:45:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61952 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61952 ']' 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61952 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61952 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:07:43.859 killing process with pid 61952 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61952' 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61952 00:07:43.859 [2024-12-12 05:45:51.343042] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:43.859 05:45:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61952 00:07:43.859 [2024-12-12 05:45:51.359126] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:45.239 05:45:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:07:45.239 00:07:45.239 real 0m4.926s 00:07:45.239 user 0m7.141s 00:07:45.239 sys 0m0.808s 00:07:45.239 05:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.239 05:45:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:45.239 ************************************ 00:07:45.239 END TEST raid_state_function_test_sb 00:07:45.239 ************************************ 00:07:45.239 05:45:52 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:45.239 05:45:52 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:45.239 05:45:52 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.239 05:45:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:45.239 ************************************ 00:07:45.239 START TEST raid_superblock_test 00:07:45.239 ************************************ 00:07:45.239 05:45:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:07:45.239 05:45:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:07:45.239 05:45:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- 
# local num_base_bdevs=2 00:07:45.239 05:45:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:07:45.239 05:45:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:07:45.239 05:45:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:07:45.239 05:45:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:07:45.239 05:45:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:07:45.239 05:45:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:07:45.239 05:45:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:07:45.239 05:45:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:07:45.239 05:45:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:07:45.239 05:45:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:07:45.239 05:45:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:07:45.239 05:45:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:07:45.239 05:45:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:07:45.239 05:45:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:07:45.239 05:45:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62204 00:07:45.239 05:45:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:07:45.240 05:45:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62204 00:07:45.240 05:45:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62204 ']' 00:07:45.240 05:45:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:45.240 05:45:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:45.240 05:45:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:45.240 05:45:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.240 05:45:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.240 [2024-12-12 05:45:52.595339] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:07:45.240 [2024-12-12 05:45:52.595473] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62204 ] 00:07:45.499 [2024-12-12 05:45:52.766801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.499 [2024-12-12 05:45:52.874951] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.759 [2024-12-12 05:45:53.069325] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.759 [2024-12-12 05:45:53.069390] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:46.019 
05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.019 malloc1 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.019 [2024-12-12 05:45:53.454976] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:46.019 [2024-12-12 05:45:53.455037] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.019 [2024-12-12 05:45:53.455075] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:07:46.019 [2024-12-12 05:45:53.455083] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:07:46.019 [2024-12-12 05:45:53.457136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.019 [2024-12-12 05:45:53.457239] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:46.019 pt1 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.019 malloc2 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.019 [2024-12-12 05:45:53.509235] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:46.019 [2024-12-12 05:45:53.509339] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.019 [2024-12-12 05:45:53.509380] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:07:46.019 [2024-12-12 05:45:53.509408] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.019 [2024-12-12 05:45:53.511926] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.019 [2024-12-12 05:45:53.511994] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:46.019 pt2 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.019 [2024-12-12 05:45:53.521266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:46.019 [2024-12-12 05:45:53.523020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:46.019 [2024-12-12 05:45:53.523208] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:46.019 [2024-12-12 05:45:53.523255] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:07:46.019 [2024-12-12 05:45:53.523561] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:46.019 [2024-12-12 05:45:53.523756] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:46.019 [2024-12-12 05:45:53.523800] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:07:46.019 [2024-12-12 05:45:53.524000] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.019 05:45:53 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:46.019 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.279 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.279 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:46.279 "name": "raid_bdev1", 00:07:46.279 "uuid": "79977dc0-dd68-4448-88dd-7da8a88cd84c", 00:07:46.279 "strip_size_kb": 64, 00:07:46.279 "state": "online", 00:07:46.279 "raid_level": "raid0", 00:07:46.279 "superblock": true, 00:07:46.279 "num_base_bdevs": 2, 00:07:46.279 "num_base_bdevs_discovered": 2, 00:07:46.279 "num_base_bdevs_operational": 2, 00:07:46.279 "base_bdevs_list": [ 00:07:46.279 { 00:07:46.279 "name": "pt1", 00:07:46.279 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.279 "is_configured": true, 00:07:46.279 "data_offset": 2048, 00:07:46.279 "data_size": 63488 00:07:46.279 }, 00:07:46.279 { 00:07:46.279 "name": "pt2", 00:07:46.279 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.279 "is_configured": true, 00:07:46.279 "data_offset": 2048, 00:07:46.279 "data_size": 63488 00:07:46.280 } 00:07:46.280 ] 00:07:46.280 }' 00:07:46.280 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:46.280 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.540 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:07:46.540 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:46.540 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:46.540 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:46.540 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # 
local name 00:07:46.540 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:46.540 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:46.540 05:45:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:46.540 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.540 05:45:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.540 [2024-12-12 05:45:53.984769] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.540 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.540 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:46.540 "name": "raid_bdev1", 00:07:46.540 "aliases": [ 00:07:46.540 "79977dc0-dd68-4448-88dd-7da8a88cd84c" 00:07:46.540 ], 00:07:46.540 "product_name": "Raid Volume", 00:07:46.540 "block_size": 512, 00:07:46.540 "num_blocks": 126976, 00:07:46.540 "uuid": "79977dc0-dd68-4448-88dd-7da8a88cd84c", 00:07:46.540 "assigned_rate_limits": { 00:07:46.540 "rw_ios_per_sec": 0, 00:07:46.540 "rw_mbytes_per_sec": 0, 00:07:46.540 "r_mbytes_per_sec": 0, 00:07:46.540 "w_mbytes_per_sec": 0 00:07:46.540 }, 00:07:46.540 "claimed": false, 00:07:46.540 "zoned": false, 00:07:46.540 "supported_io_types": { 00:07:46.540 "read": true, 00:07:46.540 "write": true, 00:07:46.540 "unmap": true, 00:07:46.540 "flush": true, 00:07:46.540 "reset": true, 00:07:46.540 "nvme_admin": false, 00:07:46.540 "nvme_io": false, 00:07:46.540 "nvme_io_md": false, 00:07:46.540 "write_zeroes": true, 00:07:46.540 "zcopy": false, 00:07:46.540 "get_zone_info": false, 00:07:46.540 "zone_management": false, 00:07:46.540 "zone_append": false, 00:07:46.540 "compare": false, 00:07:46.540 "compare_and_write": false, 00:07:46.540 "abort": false, 00:07:46.540 
"seek_hole": false, 00:07:46.540 "seek_data": false, 00:07:46.540 "copy": false, 00:07:46.540 "nvme_iov_md": false 00:07:46.540 }, 00:07:46.540 "memory_domains": [ 00:07:46.540 { 00:07:46.540 "dma_device_id": "system", 00:07:46.540 "dma_device_type": 1 00:07:46.540 }, 00:07:46.540 { 00:07:46.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.540 "dma_device_type": 2 00:07:46.540 }, 00:07:46.540 { 00:07:46.540 "dma_device_id": "system", 00:07:46.540 "dma_device_type": 1 00:07:46.540 }, 00:07:46.540 { 00:07:46.540 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.540 "dma_device_type": 2 00:07:46.540 } 00:07:46.540 ], 00:07:46.540 "driver_specific": { 00:07:46.540 "raid": { 00:07:46.540 "uuid": "79977dc0-dd68-4448-88dd-7da8a88cd84c", 00:07:46.540 "strip_size_kb": 64, 00:07:46.540 "state": "online", 00:07:46.540 "raid_level": "raid0", 00:07:46.540 "superblock": true, 00:07:46.540 "num_base_bdevs": 2, 00:07:46.540 "num_base_bdevs_discovered": 2, 00:07:46.540 "num_base_bdevs_operational": 2, 00:07:46.540 "base_bdevs_list": [ 00:07:46.540 { 00:07:46.540 "name": "pt1", 00:07:46.540 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:46.540 "is_configured": true, 00:07:46.540 "data_offset": 2048, 00:07:46.540 "data_size": 63488 00:07:46.540 }, 00:07:46.540 { 00:07:46.540 "name": "pt2", 00:07:46.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:46.540 "is_configured": true, 00:07:46.540 "data_offset": 2048, 00:07:46.540 "data_size": 63488 00:07:46.540 } 00:07:46.540 ] 00:07:46.540 } 00:07:46.540 } 00:07:46.540 }' 00:07:46.540 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:46.800 pt2' 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.800 [2024-12-12 05:45:54.208293] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=79977dc0-dd68-4448-88dd-7da8a88cd84c 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 79977dc0-dd68-4448-88dd-7da8a88cd84c ']' 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.800 [2024-12-12 05:45:54.251955] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:46.800 [2024-12-12 05:45:54.251977] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:46.800 [2024-12-12 05:45:54.252049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:46.800 [2024-12-12 05:45:54.252093] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:46.800 [2024-12-12 05:45:54.252106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:46.800 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.801 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.801 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.801 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:07:46.801 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:07:46.801 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:46.801 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:07:46.801 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.801 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.801 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:46.801 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:07:46.801 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:07:46.801 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:46.801 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:07:47.061 05:45:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.061 [2024-12-12 05:45:54.387769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:47.061 [2024-12-12 05:45:54.389760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:47.061 [2024-12-12 05:45:54.389822] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:07:47.061 [2024-12-12 05:45:54.389884] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:47.061 [2024-12-12 05:45:54.389899] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:47.061 [2024-12-12 05:45:54.389911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:07:47.061 request: 00:07:47.061 { 00:07:47.061 "name": "raid_bdev1", 00:07:47.061 "raid_level": "raid0", 00:07:47.061 "base_bdevs": [ 00:07:47.061 "malloc1", 00:07:47.061 "malloc2" 00:07:47.061 ], 00:07:47.061 "strip_size_kb": 64, 00:07:47.061 "superblock": false, 00:07:47.061 "method": "bdev_raid_create", 00:07:47.061 "req_id": 1 00:07:47.061 } 00:07:47.061 Got JSON-RPC error response 00:07:47.061 response: 00:07:47.061 { 00:07:47.061 "code": -17, 00:07:47.061 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:47.061 } 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.061 [2024-12-12 05:45:54.439669] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:47.061 [2024-12-12 05:45:54.439760] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.061 [2024-12-12 05:45:54.439793] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:07:47.061 [2024-12-12 05:45:54.439842] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.061 [2024-12-12 05:45:54.441999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.061 [2024-12-12 05:45:54.442066] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:47.061 [2024-12-12 05:45:54.442191] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:47.061 [2024-12-12 05:45:54.442283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:47.061 pt1 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.061 "name": "raid_bdev1", 00:07:47.061 "uuid": "79977dc0-dd68-4448-88dd-7da8a88cd84c", 00:07:47.061 "strip_size_kb": 64, 00:07:47.061 "state": "configuring", 00:07:47.061 "raid_level": "raid0", 00:07:47.061 "superblock": true, 00:07:47.061 "num_base_bdevs": 2, 00:07:47.061 "num_base_bdevs_discovered": 1, 00:07:47.061 "num_base_bdevs_operational": 2, 00:07:47.061 "base_bdevs_list": [ 00:07:47.061 { 00:07:47.061 "name": 
"pt1", 00:07:47.061 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.061 "is_configured": true, 00:07:47.061 "data_offset": 2048, 00:07:47.061 "data_size": 63488 00:07:47.061 }, 00:07:47.061 { 00:07:47.061 "name": null, 00:07:47.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.061 "is_configured": false, 00:07:47.061 "data_offset": 2048, 00:07:47.061 "data_size": 63488 00:07:47.061 } 00:07:47.061 ] 00:07:47.061 }' 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.061 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.659 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:07:47.659 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:07:47.659 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:47.659 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:47.659 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.659 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.659 [2024-12-12 05:45:54.850961] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:47.659 [2024-12-12 05:45:54.851021] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.659 [2024-12-12 05:45:54.851041] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:07:47.659 [2024-12-12 05:45:54.851052] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.659 [2024-12-12 05:45:54.851476] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.659 [2024-12-12 05:45:54.851518] vbdev_passthru.c: 711:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt2 00:07:47.659 [2024-12-12 05:45:54.851592] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:47.659 [2024-12-12 05:45:54.851617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:47.659 [2024-12-12 05:45:54.851720] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:47.659 [2024-12-12 05:45:54.851731] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:47.659 [2024-12-12 05:45:54.851972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:07:47.659 [2024-12-12 05:45:54.852127] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:47.659 [2024-12-12 05:45:54.852136] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:07:47.659 [2024-12-12 05:45:54.852265] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.659 pt2 00:07:47.659 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.659 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:07:47.659 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:07:47.659 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:47.659 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:07:47.659 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:47.660 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:47.660 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:47.660 05:45:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:47.660 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:47.660 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:47.660 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:47.660 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:47.660 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.660 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:47.660 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.660 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.660 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.660 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:47.660 "name": "raid_bdev1", 00:07:47.660 "uuid": "79977dc0-dd68-4448-88dd-7da8a88cd84c", 00:07:47.660 "strip_size_kb": 64, 00:07:47.660 "state": "online", 00:07:47.660 "raid_level": "raid0", 00:07:47.660 "superblock": true, 00:07:47.660 "num_base_bdevs": 2, 00:07:47.660 "num_base_bdevs_discovered": 2, 00:07:47.660 "num_base_bdevs_operational": 2, 00:07:47.660 "base_bdevs_list": [ 00:07:47.660 { 00:07:47.660 "name": "pt1", 00:07:47.660 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.660 "is_configured": true, 00:07:47.660 "data_offset": 2048, 00:07:47.660 "data_size": 63488 00:07:47.660 }, 00:07:47.660 { 00:07:47.660 "name": "pt2", 00:07:47.660 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.660 "is_configured": true, 00:07:47.660 "data_offset": 2048, 00:07:47.660 "data_size": 63488 00:07:47.660 } 
00:07:47.660 ] 00:07:47.660 }' 00:07:47.660 05:45:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:47.660 05:45:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.928 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:07:47.928 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:07:47.928 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:47.928 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:47.928 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:47.928 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:47.928 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:47.928 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:47.928 05:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.928 05:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.928 [2024-12-12 05:45:55.294466] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.928 05:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.928 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:47.928 "name": "raid_bdev1", 00:07:47.928 "aliases": [ 00:07:47.928 "79977dc0-dd68-4448-88dd-7da8a88cd84c" 00:07:47.928 ], 00:07:47.928 "product_name": "Raid Volume", 00:07:47.928 "block_size": 512, 00:07:47.928 "num_blocks": 126976, 00:07:47.928 "uuid": "79977dc0-dd68-4448-88dd-7da8a88cd84c", 00:07:47.928 "assigned_rate_limits": { 00:07:47.928 "rw_ios_per_sec": 0, 
00:07:47.928 "rw_mbytes_per_sec": 0, 00:07:47.928 "r_mbytes_per_sec": 0, 00:07:47.928 "w_mbytes_per_sec": 0 00:07:47.928 }, 00:07:47.928 "claimed": false, 00:07:47.928 "zoned": false, 00:07:47.928 "supported_io_types": { 00:07:47.928 "read": true, 00:07:47.928 "write": true, 00:07:47.928 "unmap": true, 00:07:47.928 "flush": true, 00:07:47.928 "reset": true, 00:07:47.928 "nvme_admin": false, 00:07:47.928 "nvme_io": false, 00:07:47.928 "nvme_io_md": false, 00:07:47.928 "write_zeroes": true, 00:07:47.928 "zcopy": false, 00:07:47.928 "get_zone_info": false, 00:07:47.928 "zone_management": false, 00:07:47.928 "zone_append": false, 00:07:47.928 "compare": false, 00:07:47.928 "compare_and_write": false, 00:07:47.928 "abort": false, 00:07:47.928 "seek_hole": false, 00:07:47.928 "seek_data": false, 00:07:47.928 "copy": false, 00:07:47.928 "nvme_iov_md": false 00:07:47.928 }, 00:07:47.928 "memory_domains": [ 00:07:47.928 { 00:07:47.928 "dma_device_id": "system", 00:07:47.928 "dma_device_type": 1 00:07:47.928 }, 00:07:47.928 { 00:07:47.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.928 "dma_device_type": 2 00:07:47.928 }, 00:07:47.928 { 00:07:47.928 "dma_device_id": "system", 00:07:47.928 "dma_device_type": 1 00:07:47.928 }, 00:07:47.928 { 00:07:47.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.928 "dma_device_type": 2 00:07:47.928 } 00:07:47.928 ], 00:07:47.928 "driver_specific": { 00:07:47.928 "raid": { 00:07:47.928 "uuid": "79977dc0-dd68-4448-88dd-7da8a88cd84c", 00:07:47.928 "strip_size_kb": 64, 00:07:47.928 "state": "online", 00:07:47.928 "raid_level": "raid0", 00:07:47.928 "superblock": true, 00:07:47.928 "num_base_bdevs": 2, 00:07:47.928 "num_base_bdevs_discovered": 2, 00:07:47.928 "num_base_bdevs_operational": 2, 00:07:47.928 "base_bdevs_list": [ 00:07:47.928 { 00:07:47.928 "name": "pt1", 00:07:47.928 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:47.928 "is_configured": true, 00:07:47.928 "data_offset": 2048, 00:07:47.928 "data_size": 63488 
00:07:47.928 }, 00:07:47.928 { 00:07:47.928 "name": "pt2", 00:07:47.929 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:47.929 "is_configured": true, 00:07:47.929 "data_offset": 2048, 00:07:47.929 "data_size": 63488 00:07:47.929 } 00:07:47.929 ] 00:07:47.929 } 00:07:47.929 } 00:07:47.929 }' 00:07:47.929 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:47.929 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:07:47.929 pt2' 00:07:47.929 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.929 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:47.929 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:47.929 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:07:47.929 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:47.929 05:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.929 05:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.929 05:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:07:48.193 05:45:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.193 [2024-12-12 05:45:55.514040] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 79977dc0-dd68-4448-88dd-7da8a88cd84c '!=' 79977dc0-dd68-4448-88dd-7da8a88cd84c ']' 00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62204 00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62204 ']' 00:07:48.193 05:45:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62204
00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname
00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62204
00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62204'
00:07:48.193 killing process with pid 62204
00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62204
00:07:48.193 [2024-12-12 05:45:55.586409] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:48.193 [2024-12-12 05:45:55.586559] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:48.193 [2024-12-12 05:45:55.586638] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:48.193 [2024-12-12 05:45:55.586690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:07:48.193 05:45:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62204
00:07:48.453 [2024-12-12 05:45:55.783091] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:49.403 05:45:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0
00:07:49.403
00:07:49.403 real 0m4.348s
00:07:49.403 user 0m6.127s
00:07:49.403 sys 0m0.723s
00:07:49.403 05:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:49.403 05:45:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:49.403 ************************************
00:07:49.403 END TEST raid_superblock_test
00:07:49.403 ************************************
00:07:49.403 05:45:56 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read
00:07:49.403 05:45:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:49.403 05:45:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:49.403 05:45:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:49.403 ************************************
00:07:49.403 START TEST raid_read_error_test
00:07:49.403 ************************************
00:07:49.403 05:45:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read
00:07:49.403 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:07:49.403 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:07:49.403 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read
00:07:49.662 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:07:49.662 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:49.662 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:07:49.662 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:49.662 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:49.662 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:07:49.662 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:49.662 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:49.662 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:49.662 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:07:49.663 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:07:49.663 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:07:49.663 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:07:49.663 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:07:49.663 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:07:49.663 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:07:49.663 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:07:49.663 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:07:49.663 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:07:49.663 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.lQ1vBXAmzQ
00:07:49.663 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62410
00:07:49.663 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:07:49.663 05:45:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62410
00:07:49.663 05:45:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62410 ']'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:49.663 05:45:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:49.663 05:45:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:49.663 05:45:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:49.663 05:45:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:49.663 05:45:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:49.663 [2024-12-12 05:45:57.025752] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization...
00:07:49.663 [2024-12-12 05:45:57.025870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62410 ]
00:07:49.663 [2024-12-12 05:45:57.183071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:49.922 [2024-12-12 05:45:57.288713] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:50.182 [2024-12-12 05:45:57.475544] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:50.182 [2024-12-12 05:45:57.475601] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:50.441 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:50.441 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0
00:07:50.441 05:45:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:50.441 05:45:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:07:50.441 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:50.441 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.441 BaseBdev1_malloc
00:07:50.441 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:50.441 05:45:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:07:50.441 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:50.441 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.441 true
00:07:50.441 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:50.441 05:45:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:07:50.441 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:50.441 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.441 [2024-12-12 05:45:57.897940] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:07:50.441 [2024-12-12 05:45:57.897994] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:50.441 [2024-12-12 05:45:57.898014] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:07:50.441 [2024-12-12 05:45:57.898025] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:50.441 [2024-12-12 05:45:57.900044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:50.441 [2024-12-12 05:45:57.900084] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:07:50.441 BaseBdev1
00:07:50.442 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:50.442 05:45:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:50.442 05:45:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:07:50.442 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:50.442 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.442 BaseBdev2_malloc
00:07:50.442 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:50.442 05:45:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:07:50.442 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:50.442 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.442 true
00:07:50.442 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:50.442 05:45:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:07:50.442 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:50.442 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.701 [2024-12-12 05:45:57.963811] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:07:50.701 [2024-12-12 05:45:57.963913] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:50.701 [2024-12-12 05:45:57.963934] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:07:50.701 [2024-12-12 05:45:57.963944] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:50.701 [2024-12-12 05:45:57.965997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:50.701 [2024-12-12 05:45:57.966037] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:07:50.701 BaseBdev2
00:07:50.701 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:50.701 05:45:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:07:50.701 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:50.701 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.701 [2024-12-12 05:45:57.975856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:50.701 [2024-12-12 05:45:57.977540] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:50.701 [2024-12-12 05:45:57.977731] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:50.701 [2024-12-12 05:45:57.977754] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:50.701 [2024-12-12 05:45:57.977999] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:07:50.701 [2024-12-12 05:45:57.978156] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:50.701 [2024-12-12 05:45:57.978168] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:07:50.701 [2024-12-12 05:45:57.978314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:50.701 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:50.701 05:45:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:50.701 05:45:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:50.701 05:45:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:50.701 05:45:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:50.701 05:45:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:50.701 05:45:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:50.701 05:45:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:50.701 05:45:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:50.701 05:45:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:50.701 05:45:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:50.701 05:45:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:50.701 05:45:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:50.701 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:50.701 05:45:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.701 05:45:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:50.701 05:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:50.701 "name": "raid_bdev1",
00:07:50.701 "uuid": "d3e57734-18d5-4671-a4b2-abd4f0896dba",
00:07:50.701 "strip_size_kb": 64,
00:07:50.701 "state": "online",
00:07:50.701 "raid_level": "raid0",
00:07:50.701 "superblock": true,
00:07:50.701 "num_base_bdevs": 2,
00:07:50.701 "num_base_bdevs_discovered": 2,
00:07:50.701 "num_base_bdevs_operational": 2,
00:07:50.701 "base_bdevs_list": [
00:07:50.701 {
00:07:50.701 "name": "BaseBdev1",
00:07:50.701 "uuid": "5bca874f-2f84-5262-9ae4-e04a8d491dd1",
00:07:50.701 "is_configured": true,
00:07:50.701 "data_offset": 2048,
00:07:50.701 "data_size": 63488
00:07:50.701 },
00:07:50.701 {
00:07:50.701 "name": "BaseBdev2",
00:07:50.701 "uuid": "6c0b4bc3-8bc8-5b1b-b09b-fd3818d5fdd3",
00:07:50.701 "is_configured": true,
00:07:50.701 "data_offset": 2048,
00:07:50.701 "data_size": 63488
00:07:50.701 }
00:07:50.701 ]
00:07:50.701 }'
00:07:50.701 05:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:50.701 05:45:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:50.961 05:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:07:50.961 05:45:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:07:51.220 [2024-12-12 05:45:58.508210] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:07:52.157 05:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure
00:07:52.157 05:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:52.157 05:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:52.158 "name": "raid_bdev1",
00:07:52.158 "uuid": "d3e57734-18d5-4671-a4b2-abd4f0896dba",
00:07:52.158 "strip_size_kb": 64,
00:07:52.158 "state": "online",
00:07:52.158 "raid_level": "raid0",
00:07:52.158 "superblock": true,
00:07:52.158 "num_base_bdevs": 2,
00:07:52.158 "num_base_bdevs_discovered": 2,
00:07:52.158 "num_base_bdevs_operational": 2,
00:07:52.158 "base_bdevs_list": [
00:07:52.158 {
00:07:52.158 "name": "BaseBdev1",
00:07:52.158 "uuid": "5bca874f-2f84-5262-9ae4-e04a8d491dd1",
00:07:52.158 "is_configured": true,
00:07:52.158 "data_offset": 2048,
00:07:52.158 "data_size": 63488
00:07:52.158 },
00:07:52.158 {
00:07:52.158 "name": "BaseBdev2",
00:07:52.158 "uuid": "6c0b4bc3-8bc8-5b1b-b09b-fd3818d5fdd3",
00:07:52.158 "is_configured": true,
00:07:52.158 "data_offset": 2048,
00:07:52.158 "data_size": 63488
00:07:52.158 }
00:07:52.158 ]
00:07:52.158 }'
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:52.158 05:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:52.417 05:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:07:52.417 05:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:52.417 05:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:52.417 [2024-12-12 05:45:59.903982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:07:52.417 [2024-12-12 05:45:59.904018] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:52.417 [2024-12-12 05:45:59.906748] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:52.417 [2024-12-12 05:45:59.906847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:52.417 [2024-12-12 05:45:59.906901] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:52.417 [2024-12-12 05:45:59.906913] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline
00:07:52.417 {
00:07:52.417 "results": [
00:07:52.417 {
00:07:52.417 "job": "raid_bdev1",
00:07:52.417 "core_mask": "0x1",
00:07:52.417 "workload": "randrw",
00:07:52.417 "percentage": 50,
00:07:52.417 "status": "finished",
00:07:52.417 "queue_depth": 1,
00:07:52.417 "io_size": 131072,
00:07:52.417 "runtime": 1.396758,
00:07:52.417 "iops": 16690.79396717255,
00:07:52.417 "mibps": 2086.349245896569,
00:07:52.417 "io_failed": 1,
00:07:52.417 "io_timeout": 0,
00:07:52.417 "avg_latency_us": 82.87526410841473,
00:07:52.417 "min_latency_us": 25.2646288209607,
00:07:52.417 "max_latency_us": 1395.1441048034935
00:07:52.417 }
00:07:52.417 ],
00:07:52.417 "core_count": 1
00:07:52.417 }
00:07:52.417 05:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:52.417 05:45:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62410
00:07:52.417 05:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62410 ']'
00:07:52.417 05:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62410
00:07:52.417 05:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname
00:07:52.417 05:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:52.417 05:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62410
00:07:52.676 05:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:52.676 05:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:52.676 05:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62410'
killing process with pid 62410
00:07:52.676 05:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62410
00:07:52.676 [2024-12-12 05:45:59.954643] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:52.676 05:45:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62410
00:07:52.676 [2024-12-12 05:46:00.082383] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:54.055 05:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.lQ1vBXAmzQ
00:07:54.055 05:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:07:54.055 05:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:07:54.055 05:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72
00:07:54.055 05:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 ************************************
00:07:54.055 END TEST raid_read_error_test ************************************
00:07:54.055 05:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:07:54.055 05:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1
00:07:54.055 05:46:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]]
00:07:54.055
real 0m4.281s
user 0m5.156s
sys 0m0.527s
00:07:54.055 05:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:54.055 05:46:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.055 05:46:01 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write
00:07:54.055 05:46:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:54.055 05:46:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:54.055 05:46:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:54.055 ************************************
00:07:54.055 START TEST raid_write_error_test
00:07:54.055 ************************************
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 ))
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ ))
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs ))
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']'
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64'
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.OVym3HqJZU
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62556
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62556
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62556 ']'
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:54.055 05:46:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.055 [2024-12-12 05:46:01.381128] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization...
00:07:54.055 [2024-12-12 05:46:01.381230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62556 ]
00:07:54.055 [2024-12-12 05:46:01.553974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:54.314 [2024-12-12 05:46:01.662355] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:54.574 [2024-12-12 05:46:01.857130] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:54.574 [2024-12-12 05:46:01.857159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.833 BaseBdev1_malloc
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.833 true
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.833 [2024-12-12 05:46:02.258624] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc
00:07:54.833 [2024-12-12 05:46:02.258681] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:54.833 [2024-12-12 05:46:02.258700] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580
00:07:54.833 [2024-12-12 05:46:02.258711] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:54.833 [2024-12-12 05:46:02.260728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:54.833 [2024-12-12 05:46:02.260844] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:07:54.833 BaseBdev1
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}"
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.833 BaseBdev2_malloc
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.833 true
00:07:54.833 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.834 [2024-12-12 05:46:02.325352] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc
00:07:54.834 [2024-12-12 05:46:02.325408] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:54.834 [2024-12-12 05:46:02.325440] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:07:54.834 [2024-12-12 05:46:02.325451] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:54.834 [2024-12-12 05:46:02.327522] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:54.834 [2024-12-12 05:46:02.327559] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:07:54.834 BaseBdev2
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.834 [2024-12-12 05:46:02.337400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:07:54.834 [2024-12-12 05:46:02.339249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:07:54.834 [2024-12-12 05:46:02.339430] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:07:54.834 [2024-12-12 05:46:02.339448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:07:54.834 [2024-12-12 05:46:02.339691] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220
00:07:54.834 [2024-12-12 05:46:02.339860] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:07:54.834 [2024-12-12 05:46:02.339879] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:07:54.834 [2024-12-12 05:46:02.340061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:54.834 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:07:55.093 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:55.093 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:07:55.093 "name": "raid_bdev1",
00:07:55.093 "uuid": "2ce11fef-ba46-45ff-99e9-8b783d1f9eeb",
00:07:55.093 "strip_size_kb": 64,
00:07:55.093 "state": "online",
00:07:55.093 "raid_level": "raid0",
00:07:55.093 "superblock": true,
00:07:55.093 "num_base_bdevs": 2,
00:07:55.093 "num_base_bdevs_discovered": 2,
00:07:55.093 "num_base_bdevs_operational": 2,
00:07:55.093 "base_bdevs_list": [
00:07:55.093 {
00:07:55.093 "name": "BaseBdev1",
00:07:55.093 "uuid": "9d0057a4-ebc2-59f6-8690-01a4c2a67667",
00:07:55.093 "is_configured": true,
00:07:55.093 "data_offset": 2048,
00:07:55.093 "data_size": 63488
00:07:55.093 },
00:07:55.093 {
00:07:55.093 "name": "BaseBdev2",
00:07:55.093 "uuid": "f8db9be0-484c-5f6f-9b4d-9e70c5d949a1",
00:07:55.093 "is_configured": true,
00:07:55.093 "data_offset": 2048,
00:07:55.093 "data_size": 63488
00:07:55.093 }
00:07:55.093 ]
00:07:55.093 }'
00:07:55.093 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:07:55.093 05:46:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:55.374 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:07:55.374 05:46:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1
00:07:55.374 [2024-12-12 05:46:02.861891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:07:56.349 05:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure
00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs
00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]]
00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2
00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.350 "name": "raid_bdev1", 00:07:56.350 "uuid": "2ce11fef-ba46-45ff-99e9-8b783d1f9eeb", 00:07:56.350 "strip_size_kb": 64, 00:07:56.350 "state": "online", 00:07:56.350 "raid_level": "raid0", 00:07:56.350 "superblock": true, 00:07:56.350 "num_base_bdevs": 2, 00:07:56.350 "num_base_bdevs_discovered": 2, 00:07:56.350 "num_base_bdevs_operational": 2, 00:07:56.350 "base_bdevs_list": [ 00:07:56.350 { 00:07:56.350 "name": "BaseBdev1", 00:07:56.350 "uuid": "9d0057a4-ebc2-59f6-8690-01a4c2a67667", 00:07:56.350 "is_configured": true, 00:07:56.350 "data_offset": 2048, 00:07:56.350 "data_size": 63488 00:07:56.350 }, 00:07:56.350 { 00:07:56.350 "name": "BaseBdev2", 00:07:56.350 "uuid": "f8db9be0-484c-5f6f-9b4d-9e70c5d949a1", 00:07:56.350 "is_configured": true, 00:07:56.350 "data_offset": 2048, 00:07:56.350 "data_size": 63488 00:07:56.350 } 00:07:56.350 ] 00:07:56.350 }' 00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.350 05:46:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.919 05:46:04 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:07:56.919 05:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.919 05:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.919 [2024-12-12 05:46:04.225591] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:56.920 [2024-12-12 05:46:04.225684] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.920 [2024-12-12 05:46:04.228648] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.920 [2024-12-12 05:46:04.228735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:56.920 [2024-12-12 05:46:04.228813] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.920 [2024-12-12 05:46:04.228860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:07:56.920 { 00:07:56.920 "results": [ 00:07:56.920 { 00:07:56.920 "job": "raid_bdev1", 00:07:56.920 "core_mask": "0x1", 00:07:56.920 "workload": "randrw", 00:07:56.920 "percentage": 50, 00:07:56.920 "status": "finished", 00:07:56.920 "queue_depth": 1, 00:07:56.920 "io_size": 131072, 00:07:56.920 "runtime": 1.364767, 00:07:56.920 "iops": 16389.61082734269, 00:07:56.920 "mibps": 2048.7013534178363, 00:07:56.920 "io_failed": 1, 00:07:56.920 "io_timeout": 0, 00:07:56.920 "avg_latency_us": 84.46435614165814, 00:07:56.920 "min_latency_us": 25.041048034934498, 00:07:56.920 "max_latency_us": 1445.2262008733624 00:07:56.920 } 00:07:56.920 ], 00:07:56.920 "core_count": 1 00:07:56.920 } 00:07:56.920 05:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.920 05:46:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62556 00:07:56.920 05:46:04 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62556 ']' 00:07:56.920 05:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62556 00:07:56.920 05:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:07:56.920 05:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.920 05:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62556 00:07:56.920 05:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:56.920 killing process with pid 62556 00:07:56.920 05:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:56.920 05:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62556' 00:07:56.920 05:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62556 00:07:56.920 [2024-12-12 05:46:04.278258] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:56.920 05:46:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62556 00:07:56.920 [2024-12-12 05:46:04.412254] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:58.300 05:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.OVym3HqJZU 00:07:58.300 05:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:07:58.300 05:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:07:58.300 05:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:07:58.300 05:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:07:58.300 ************************************ 00:07:58.300 END TEST raid_write_error_test 00:07:58.300 
************************************ 00:07:58.300 05:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:58.300 05:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:58.300 05:46:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:07:58.300 00:07:58.300 real 0m4.274s 00:07:58.300 user 0m5.131s 00:07:58.300 sys 0m0.517s 00:07:58.300 05:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.300 05:46:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.300 05:46:05 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:58.300 05:46:05 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:58.300 05:46:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:58.300 05:46:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.300 05:46:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:58.300 ************************************ 00:07:58.300 START TEST raid_state_function_test 00:07:58.300 ************************************ 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # 
raid_pid=62694 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62694' 00:07:58.300 Process raid pid: 62694 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62694 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62694 ']' 00:07:58.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:58.300 05:46:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.300 [2024-12-12 05:46:05.719181] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:07:58.300 [2024-12-12 05:46:05.719295] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.560 [2024-12-12 05:46:05.891301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.560 [2024-12-12 05:46:05.999376] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.820 [2024-12-12 05:46:06.197389] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:58.820 [2024-12-12 05:46:06.197430] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.080 [2024-12-12 05:46:06.537468] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:59.080 [2024-12-12 05:46:06.537593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:59.080 [2024-12-12 05:46:06.537621] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:59.080 [2024-12-12 05:46:06.537632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.080 05:46:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.080 "name": "Existed_Raid", 00:07:59.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.080 "strip_size_kb": 64, 00:07:59.080 "state": "configuring", 00:07:59.080 
"raid_level": "concat", 00:07:59.080 "superblock": false, 00:07:59.080 "num_base_bdevs": 2, 00:07:59.080 "num_base_bdevs_discovered": 0, 00:07:59.080 "num_base_bdevs_operational": 2, 00:07:59.080 "base_bdevs_list": [ 00:07:59.080 { 00:07:59.080 "name": "BaseBdev1", 00:07:59.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.080 "is_configured": false, 00:07:59.080 "data_offset": 0, 00:07:59.080 "data_size": 0 00:07:59.080 }, 00:07:59.080 { 00:07:59.080 "name": "BaseBdev2", 00:07:59.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.080 "is_configured": false, 00:07:59.080 "data_offset": 0, 00:07:59.080 "data_size": 0 00:07:59.080 } 00:07:59.080 ] 00:07:59.080 }' 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.080 05:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.650 05:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:59.650 05:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.650 05:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.650 [2024-12-12 05:46:06.992637] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:59.650 [2024-12-12 05:46:06.992716] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:59.650 05:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.650 05:46:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:59.650 05:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.650 05:46:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:07:59.651 [2024-12-12 05:46:07.004626] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:59.651 [2024-12-12 05:46:07.004699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:59.651 [2024-12-12 05:46:07.004724] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:59.651 [2024-12-12 05:46:07.004747] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.651 [2024-12-12 05:46:07.050318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:59.651 BaseBdev1 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.651 [ 00:07:59.651 { 00:07:59.651 "name": "BaseBdev1", 00:07:59.651 "aliases": [ 00:07:59.651 "11ba5447-486e-4067-996a-5290532b8e51" 00:07:59.651 ], 00:07:59.651 "product_name": "Malloc disk", 00:07:59.651 "block_size": 512, 00:07:59.651 "num_blocks": 65536, 00:07:59.651 "uuid": "11ba5447-486e-4067-996a-5290532b8e51", 00:07:59.651 "assigned_rate_limits": { 00:07:59.651 "rw_ios_per_sec": 0, 00:07:59.651 "rw_mbytes_per_sec": 0, 00:07:59.651 "r_mbytes_per_sec": 0, 00:07:59.651 "w_mbytes_per_sec": 0 00:07:59.651 }, 00:07:59.651 "claimed": true, 00:07:59.651 "claim_type": "exclusive_write", 00:07:59.651 "zoned": false, 00:07:59.651 "supported_io_types": { 00:07:59.651 "read": true, 00:07:59.651 "write": true, 00:07:59.651 "unmap": true, 00:07:59.651 "flush": true, 00:07:59.651 "reset": true, 00:07:59.651 "nvme_admin": false, 00:07:59.651 "nvme_io": false, 00:07:59.651 "nvme_io_md": false, 00:07:59.651 "write_zeroes": true, 00:07:59.651 "zcopy": true, 00:07:59.651 "get_zone_info": false, 00:07:59.651 "zone_management": false, 00:07:59.651 "zone_append": false, 00:07:59.651 "compare": false, 00:07:59.651 "compare_and_write": false, 00:07:59.651 "abort": true, 00:07:59.651 "seek_hole": false, 00:07:59.651 "seek_data": false, 00:07:59.651 "copy": true, 00:07:59.651 "nvme_iov_md": 
false 00:07:59.651 }, 00:07:59.651 "memory_domains": [ 00:07:59.651 { 00:07:59.651 "dma_device_id": "system", 00:07:59.651 "dma_device_type": 1 00:07:59.651 }, 00:07:59.651 { 00:07:59.651 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.651 "dma_device_type": 2 00:07:59.651 } 00:07:59.651 ], 00:07:59.651 "driver_specific": {} 00:07:59.651 } 00:07:59.651 ] 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:59.651 
05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:59.651 "name": "Existed_Raid", 00:07:59.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.651 "strip_size_kb": 64, 00:07:59.651 "state": "configuring", 00:07:59.651 "raid_level": "concat", 00:07:59.651 "superblock": false, 00:07:59.651 "num_base_bdevs": 2, 00:07:59.651 "num_base_bdevs_discovered": 1, 00:07:59.651 "num_base_bdevs_operational": 2, 00:07:59.651 "base_bdevs_list": [ 00:07:59.651 { 00:07:59.651 "name": "BaseBdev1", 00:07:59.651 "uuid": "11ba5447-486e-4067-996a-5290532b8e51", 00:07:59.651 "is_configured": true, 00:07:59.651 "data_offset": 0, 00:07:59.651 "data_size": 65536 00:07:59.651 }, 00:07:59.651 { 00:07:59.651 "name": "BaseBdev2", 00:07:59.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:59.651 "is_configured": false, 00:07:59.651 "data_offset": 0, 00:07:59.651 "data_size": 0 00:07:59.651 } 00:07:59.651 ] 00:07:59.651 }' 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:59.651 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.221 [2024-12-12 05:46:07.549511] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:00.221 [2024-12-12 05:46:07.549559] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.221 [2024-12-12 05:46:07.561534] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:00.221 [2024-12-12 05:46:07.563328] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.221 [2024-12-12 05:46:07.563409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.221 "name": "Existed_Raid", 00:08:00.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.221 "strip_size_kb": 64, 00:08:00.221 "state": "configuring", 00:08:00.221 "raid_level": "concat", 00:08:00.221 "superblock": false, 00:08:00.221 "num_base_bdevs": 2, 00:08:00.221 "num_base_bdevs_discovered": 1, 00:08:00.221 "num_base_bdevs_operational": 2, 00:08:00.221 "base_bdevs_list": [ 00:08:00.221 { 00:08:00.221 "name": "BaseBdev1", 00:08:00.221 "uuid": "11ba5447-486e-4067-996a-5290532b8e51", 00:08:00.221 "is_configured": true, 00:08:00.221 "data_offset": 0, 00:08:00.221 "data_size": 65536 00:08:00.221 }, 00:08:00.221 { 00:08:00.221 "name": "BaseBdev2", 00:08:00.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.221 "is_configured": false, 00:08:00.221 "data_offset": 0, 00:08:00.221 "data_size": 0 00:08:00.221 } 
00:08:00.221 ] 00:08:00.221 }' 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:00.221 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.481 05:46:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:00.481 05:46:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.481 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.741 [2024-12-12 05:46:08.042531] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:00.741 [2024-12-12 05:46:08.042594] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:00.741 [2024-12-12 05:46:08.042602] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:00.741 [2024-12-12 05:46:08.042915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:00.741 [2024-12-12 05:46:08.043128] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:00.741 [2024-12-12 05:46:08.043142] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:00.741 [2024-12-12 05:46:08.043411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:00.741 BaseBdev2 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:00.741 05:46:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.741 [ 00:08:00.741 { 00:08:00.741 "name": "BaseBdev2", 00:08:00.741 "aliases": [ 00:08:00.741 "6c3cfbdb-4ab8-47e1-82de-33c0f487a712" 00:08:00.741 ], 00:08:00.741 "product_name": "Malloc disk", 00:08:00.741 "block_size": 512, 00:08:00.741 "num_blocks": 65536, 00:08:00.741 "uuid": "6c3cfbdb-4ab8-47e1-82de-33c0f487a712", 00:08:00.741 "assigned_rate_limits": { 00:08:00.741 "rw_ios_per_sec": 0, 00:08:00.741 "rw_mbytes_per_sec": 0, 00:08:00.741 "r_mbytes_per_sec": 0, 00:08:00.741 "w_mbytes_per_sec": 0 00:08:00.741 }, 00:08:00.741 "claimed": true, 00:08:00.741 "claim_type": "exclusive_write", 00:08:00.741 "zoned": false, 00:08:00.741 "supported_io_types": { 00:08:00.741 "read": true, 00:08:00.741 "write": true, 00:08:00.741 "unmap": true, 00:08:00.741 "flush": true, 00:08:00.741 "reset": true, 00:08:00.741 "nvme_admin": false, 00:08:00.741 "nvme_io": false, 00:08:00.741 "nvme_io_md": 
false, 00:08:00.741 "write_zeroes": true, 00:08:00.741 "zcopy": true, 00:08:00.741 "get_zone_info": false, 00:08:00.741 "zone_management": false, 00:08:00.741 "zone_append": false, 00:08:00.741 "compare": false, 00:08:00.741 "compare_and_write": false, 00:08:00.741 "abort": true, 00:08:00.741 "seek_hole": false, 00:08:00.741 "seek_data": false, 00:08:00.741 "copy": true, 00:08:00.741 "nvme_iov_md": false 00:08:00.741 }, 00:08:00.741 "memory_domains": [ 00:08:00.741 { 00:08:00.741 "dma_device_id": "system", 00:08:00.741 "dma_device_type": 1 00:08:00.741 }, 00:08:00.741 { 00:08:00.741 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.741 "dma_device_type": 2 00:08:00.741 } 00:08:00.741 ], 00:08:00.741 "driver_specific": {} 00:08:00.741 } 00:08:00.741 ] 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:00.741 "name": "Existed_Raid", 00:08:00.741 "uuid": "27d0efde-608e-4fa2-8f5a-24f24314ec49", 00:08:00.741 "strip_size_kb": 64, 00:08:00.741 "state": "online", 00:08:00.741 "raid_level": "concat", 00:08:00.741 "superblock": false, 00:08:00.741 "num_base_bdevs": 2, 00:08:00.741 "num_base_bdevs_discovered": 2, 00:08:00.741 "num_base_bdevs_operational": 2, 00:08:00.741 "base_bdevs_list": [ 00:08:00.741 { 00:08:00.741 "name": "BaseBdev1", 00:08:00.741 "uuid": "11ba5447-486e-4067-996a-5290532b8e51", 00:08:00.741 "is_configured": true, 00:08:00.741 "data_offset": 0, 00:08:00.741 "data_size": 65536 00:08:00.741 }, 00:08:00.741 { 00:08:00.741 "name": "BaseBdev2", 00:08:00.741 "uuid": "6c3cfbdb-4ab8-47e1-82de-33c0f487a712", 00:08:00.741 "is_configured": true, 00:08:00.741 "data_offset": 0, 00:08:00.741 "data_size": 65536 00:08:00.741 } 00:08:00.741 ] 00:08:00.741 }' 00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:00.741 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.001 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:01.001 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:01.001 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:01.001 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:01.001 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:01.001 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:01.261 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:01.261 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:01.261 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.261 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.261 [2024-12-12 05:46:08.529989] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:01.261 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.261 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:01.261 "name": "Existed_Raid", 00:08:01.261 "aliases": [ 00:08:01.261 "27d0efde-608e-4fa2-8f5a-24f24314ec49" 00:08:01.261 ], 00:08:01.261 "product_name": "Raid Volume", 00:08:01.261 "block_size": 512, 00:08:01.261 "num_blocks": 131072, 00:08:01.261 "uuid": "27d0efde-608e-4fa2-8f5a-24f24314ec49", 00:08:01.261 "assigned_rate_limits": { 00:08:01.261 "rw_ios_per_sec": 0, 00:08:01.261 "rw_mbytes_per_sec": 0, 00:08:01.261 "r_mbytes_per_sec": 
0, 00:08:01.261 "w_mbytes_per_sec": 0 00:08:01.261 }, 00:08:01.261 "claimed": false, 00:08:01.261 "zoned": false, 00:08:01.261 "supported_io_types": { 00:08:01.261 "read": true, 00:08:01.261 "write": true, 00:08:01.261 "unmap": true, 00:08:01.261 "flush": true, 00:08:01.261 "reset": true, 00:08:01.261 "nvme_admin": false, 00:08:01.261 "nvme_io": false, 00:08:01.261 "nvme_io_md": false, 00:08:01.261 "write_zeroes": true, 00:08:01.261 "zcopy": false, 00:08:01.261 "get_zone_info": false, 00:08:01.261 "zone_management": false, 00:08:01.261 "zone_append": false, 00:08:01.261 "compare": false, 00:08:01.261 "compare_and_write": false, 00:08:01.261 "abort": false, 00:08:01.261 "seek_hole": false, 00:08:01.261 "seek_data": false, 00:08:01.261 "copy": false, 00:08:01.261 "nvme_iov_md": false 00:08:01.261 }, 00:08:01.261 "memory_domains": [ 00:08:01.261 { 00:08:01.261 "dma_device_id": "system", 00:08:01.261 "dma_device_type": 1 00:08:01.261 }, 00:08:01.261 { 00:08:01.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.261 "dma_device_type": 2 00:08:01.261 }, 00:08:01.261 { 00:08:01.261 "dma_device_id": "system", 00:08:01.261 "dma_device_type": 1 00:08:01.261 }, 00:08:01.261 { 00:08:01.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.261 "dma_device_type": 2 00:08:01.261 } 00:08:01.261 ], 00:08:01.261 "driver_specific": { 00:08:01.261 "raid": { 00:08:01.261 "uuid": "27d0efde-608e-4fa2-8f5a-24f24314ec49", 00:08:01.261 "strip_size_kb": 64, 00:08:01.261 "state": "online", 00:08:01.261 "raid_level": "concat", 00:08:01.261 "superblock": false, 00:08:01.261 "num_base_bdevs": 2, 00:08:01.261 "num_base_bdevs_discovered": 2, 00:08:01.261 "num_base_bdevs_operational": 2, 00:08:01.261 "base_bdevs_list": [ 00:08:01.261 { 00:08:01.261 "name": "BaseBdev1", 00:08:01.261 "uuid": "11ba5447-486e-4067-996a-5290532b8e51", 00:08:01.261 "is_configured": true, 00:08:01.261 "data_offset": 0, 00:08:01.261 "data_size": 65536 00:08:01.261 }, 00:08:01.261 { 00:08:01.261 "name": "BaseBdev2", 
00:08:01.261 "uuid": "6c3cfbdb-4ab8-47e1-82de-33c0f487a712", 00:08:01.261 "is_configured": true, 00:08:01.261 "data_offset": 0, 00:08:01.261 "data_size": 65536 00:08:01.261 } 00:08:01.261 ] 00:08:01.261 } 00:08:01.261 } 00:08:01.261 }' 00:08:01.261 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:01.261 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:01.261 BaseBdev2' 00:08:01.261 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:01.261 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:01.261 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:01.262 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:01.262 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:01.262 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.262 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.262 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.262 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:01.262 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:01.262 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:01.262 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:01.262 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:01.262 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.262 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.262 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.262 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:01.262 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:01.262 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:01.262 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.262 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.262 [2024-12-12 05:46:08.749403] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:01.262 [2024-12-12 05:46:08.749437] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:01.262 [2024-12-12 05:46:08.749486] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.522 "name": "Existed_Raid", 00:08:01.522 "uuid": "27d0efde-608e-4fa2-8f5a-24f24314ec49", 00:08:01.522 "strip_size_kb": 64, 00:08:01.522 
"state": "offline", 00:08:01.522 "raid_level": "concat", 00:08:01.522 "superblock": false, 00:08:01.522 "num_base_bdevs": 2, 00:08:01.522 "num_base_bdevs_discovered": 1, 00:08:01.522 "num_base_bdevs_operational": 1, 00:08:01.522 "base_bdevs_list": [ 00:08:01.522 { 00:08:01.522 "name": null, 00:08:01.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:01.522 "is_configured": false, 00:08:01.522 "data_offset": 0, 00:08:01.522 "data_size": 65536 00:08:01.522 }, 00:08:01.522 { 00:08:01.522 "name": "BaseBdev2", 00:08:01.522 "uuid": "6c3cfbdb-4ab8-47e1-82de-33c0f487a712", 00:08:01.522 "is_configured": true, 00:08:01.522 "data_offset": 0, 00:08:01.522 "data_size": 65536 00:08:01.522 } 00:08:01.522 ] 00:08:01.522 }' 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.522 05:46:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.782 05:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:01.782 05:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:01.782 05:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:01.782 05:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.782 05:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:01.782 05:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.782 05:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.042 05:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:02.042 05:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:02.042 05:46:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.043 [2024-12-12 05:46:09.330210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:02.043 [2024-12-12 05:46:09.330320] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62694 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62694 ']' 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 62694 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62694 00:08:02.043 killing process with pid 62694 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62694' 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62694 00:08:02.043 [2024-12-12 05:46:09.520571] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:02.043 05:46:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62694 00:08:02.043 [2024-12-12 05:46:09.536221] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:03.449 ************************************ 00:08:03.449 END TEST raid_state_function_test 00:08:03.449 ************************************ 00:08:03.449 05:46:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:03.449 00:08:03.449 real 0m4.981s 00:08:03.449 user 0m7.231s 00:08:03.449 sys 0m0.798s 00:08:03.449 05:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.449 05:46:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.449 05:46:10 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:03.449 05:46:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:08:03.449 05:46:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.449 05:46:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:03.449 ************************************ 00:08:03.449 START TEST raid_state_function_test_sb 00:08:03.449 ************************************ 00:08:03.449 05:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:08:03.449 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:03.449 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:03.449 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:03.449 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:03.449 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:03.449 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:03.449 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:03.449 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:03.449 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:03.449 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:03.449 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:03.449 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:03.449 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:03.449 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:03.450 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:03.450 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:03.450 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:03.450 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:03.450 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:03.450 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:03.450 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:03.450 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:03.450 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:03.450 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62947 00:08:03.450 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:03.450 Process raid pid: 62947 00:08:03.450 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62947' 00:08:03.450 05:46:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62947 00:08:03.450 05:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62947 ']' 00:08:03.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:03.450 05:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.450 05:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.450 05:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.450 05:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.450 05:46:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.450 [2024-12-12 05:46:10.763677] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:08:03.450 [2024-12-12 05:46:10.763794] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.450 [2024-12-12 05:46:10.935593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.710 [2024-12-12 05:46:11.040889] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.970 [2024-12-12 05:46:11.233471] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:03.970 [2024-12-12 05:46:11.233520] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:04.230 05:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.230 05:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:04.230 05:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:04.230 05:46:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.230 05:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.230 [2024-12-12 05:46:11.585454] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:04.230 [2024-12-12 05:46:11.585534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:04.230 [2024-12-12 05:46:11.585545] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.230 [2024-12-12 05:46:11.585555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.230 05:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.230 05:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:04.230 05:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.230 05:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.230 05:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:04.230 05:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.230 05:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.230 05:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.230 05:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.230 05:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.230 05:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.230 
05:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.230 05:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.230 05:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.230 05:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.230 05:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.230 05:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.231 "name": "Existed_Raid", 00:08:04.231 "uuid": "1d6c04de-f314-4576-a9cf-f231b86d76b3", 00:08:04.231 "strip_size_kb": 64, 00:08:04.231 "state": "configuring", 00:08:04.231 "raid_level": "concat", 00:08:04.231 "superblock": true, 00:08:04.231 "num_base_bdevs": 2, 00:08:04.231 "num_base_bdevs_discovered": 0, 00:08:04.231 "num_base_bdevs_operational": 2, 00:08:04.231 "base_bdevs_list": [ 00:08:04.231 { 00:08:04.231 "name": "BaseBdev1", 00:08:04.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.231 "is_configured": false, 00:08:04.231 "data_offset": 0, 00:08:04.231 "data_size": 0 00:08:04.231 }, 00:08:04.231 { 00:08:04.231 "name": "BaseBdev2", 00:08:04.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.231 "is_configured": false, 00:08:04.231 "data_offset": 0, 00:08:04.231 "data_size": 0 00:08:04.231 } 00:08:04.231 ] 00:08:04.231 }' 00:08:04.231 05:46:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.231 05:46:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.800 [2024-12-12 05:46:12.036625] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:04.800 [2024-12-12 05:46:12.036704] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.800 [2024-12-12 05:46:12.048606] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:04.800 [2024-12-12 05:46:12.048677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:04.800 [2024-12-12 05:46:12.048703] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:04.800 [2024-12-12 05:46:12.048726] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.800 [2024-12-12 05:46:12.094290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev1 is claimed 00:08:04.800 BaseBdev1 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.800 [ 00:08:04.800 { 00:08:04.800 "name": "BaseBdev1", 00:08:04.800 "aliases": [ 00:08:04.800 "acd475e2-0f05-4544-85ac-c344a5c1588f" 00:08:04.800 ], 00:08:04.800 "product_name": "Malloc disk", 00:08:04.800 "block_size": 512, 00:08:04.800 "num_blocks": 65536, 00:08:04.800 "uuid": "acd475e2-0f05-4544-85ac-c344a5c1588f", 00:08:04.800 
"assigned_rate_limits": { 00:08:04.800 "rw_ios_per_sec": 0, 00:08:04.800 "rw_mbytes_per_sec": 0, 00:08:04.800 "r_mbytes_per_sec": 0, 00:08:04.800 "w_mbytes_per_sec": 0 00:08:04.800 }, 00:08:04.800 "claimed": true, 00:08:04.800 "claim_type": "exclusive_write", 00:08:04.800 "zoned": false, 00:08:04.800 "supported_io_types": { 00:08:04.800 "read": true, 00:08:04.800 "write": true, 00:08:04.800 "unmap": true, 00:08:04.800 "flush": true, 00:08:04.800 "reset": true, 00:08:04.800 "nvme_admin": false, 00:08:04.800 "nvme_io": false, 00:08:04.800 "nvme_io_md": false, 00:08:04.800 "write_zeroes": true, 00:08:04.800 "zcopy": true, 00:08:04.800 "get_zone_info": false, 00:08:04.800 "zone_management": false, 00:08:04.800 "zone_append": false, 00:08:04.800 "compare": false, 00:08:04.800 "compare_and_write": false, 00:08:04.800 "abort": true, 00:08:04.800 "seek_hole": false, 00:08:04.800 "seek_data": false, 00:08:04.800 "copy": true, 00:08:04.800 "nvme_iov_md": false 00:08:04.800 }, 00:08:04.800 "memory_domains": [ 00:08:04.800 { 00:08:04.800 "dma_device_id": "system", 00:08:04.800 "dma_device_type": 1 00:08:04.800 }, 00:08:04.800 { 00:08:04.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.800 "dma_device_type": 2 00:08:04.800 } 00:08:04.800 ], 00:08:04.800 "driver_specific": {} 00:08:04.800 } 00:08:04.800 ] 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:04.800 05:46:12 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:04.801 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:04.801 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:04.801 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:04.801 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:04.801 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:04.801 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:04.801 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:04.801 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:04.801 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:04.801 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:04.801 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:04.801 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:04.801 "name": "Existed_Raid", 00:08:04.801 "uuid": "74b00ced-31af-4a81-b388-6045a979c419", 00:08:04.801 "strip_size_kb": 64, 00:08:04.801 "state": "configuring", 00:08:04.801 "raid_level": "concat", 00:08:04.801 "superblock": true, 00:08:04.801 "num_base_bdevs": 2, 00:08:04.801 "num_base_bdevs_discovered": 1, 00:08:04.801 "num_base_bdevs_operational": 2, 00:08:04.801 "base_bdevs_list": [ 00:08:04.801 { 00:08:04.801 "name": "BaseBdev1", 00:08:04.801 "uuid": "acd475e2-0f05-4544-85ac-c344a5c1588f", 00:08:04.801 "is_configured": true, 00:08:04.801 "data_offset": 
2048, 00:08:04.801 "data_size": 63488 00:08:04.801 }, 00:08:04.801 { 00:08:04.801 "name": "BaseBdev2", 00:08:04.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:04.801 "is_configured": false, 00:08:04.801 "data_offset": 0, 00:08:04.801 "data_size": 0 00:08:04.801 } 00:08:04.801 ] 00:08:04.801 }' 00:08:04.801 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:04.801 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.061 [2024-12-12 05:46:12.537614] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:05.061 [2024-12-12 05:46:12.537724] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.061 [2024-12-12 05:46:12.549640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:05.061 [2024-12-12 05:46:12.551456] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:05.061 [2024-12-12 05:46:12.551510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base 
bdev BaseBdev2 doesn't exist now 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:05.061 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.321 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.321 "name": "Existed_Raid", 00:08:05.321 "uuid": "1538b2b6-8979-419d-bf31-ec4dd1e2c6d4", 00:08:05.321 "strip_size_kb": 64, 00:08:05.321 "state": "configuring", 00:08:05.321 "raid_level": "concat", 00:08:05.321 "superblock": true, 00:08:05.321 "num_base_bdevs": 2, 00:08:05.321 "num_base_bdevs_discovered": 1, 00:08:05.321 "num_base_bdevs_operational": 2, 00:08:05.321 "base_bdevs_list": [ 00:08:05.321 { 00:08:05.321 "name": "BaseBdev1", 00:08:05.321 "uuid": "acd475e2-0f05-4544-85ac-c344a5c1588f", 00:08:05.321 "is_configured": true, 00:08:05.321 "data_offset": 2048, 00:08:05.321 "data_size": 63488 00:08:05.321 }, 00:08:05.321 { 00:08:05.321 "name": "BaseBdev2", 00:08:05.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:05.321 "is_configured": false, 00:08:05.321 "data_offset": 0, 00:08:05.321 "data_size": 0 00:08:05.321 } 00:08:05.321 ] 00:08:05.321 }' 00:08:05.321 05:46:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.321 05:46:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.582 [2024-12-12 05:46:13.053934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:05.582 [2024-12-12 05:46:13.054331] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:05.582 [2024-12-12 05:46:13.054385] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:05.582 [2024-12-12 05:46:13.054700] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:05.582 [2024-12-12 05:46:13.054945] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:05.582 BaseBdev2 00:08:05.582 [2024-12-12 05:46:13.055003] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:05.582 [2024-12-12 05:46:13.055214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.582 [ 00:08:05.582 { 00:08:05.582 "name": "BaseBdev2", 00:08:05.582 "aliases": [ 00:08:05.582 "c5dbd2f4-1fbd-4675-b6ab-21e7de3259e3" 00:08:05.582 ], 00:08:05.582 "product_name": "Malloc disk", 00:08:05.582 "block_size": 512, 00:08:05.582 "num_blocks": 65536, 00:08:05.582 "uuid": "c5dbd2f4-1fbd-4675-b6ab-21e7de3259e3", 00:08:05.582 "assigned_rate_limits": { 00:08:05.582 "rw_ios_per_sec": 0, 00:08:05.582 "rw_mbytes_per_sec": 0, 00:08:05.582 "r_mbytes_per_sec": 0, 00:08:05.582 "w_mbytes_per_sec": 0 00:08:05.582 }, 00:08:05.582 "claimed": true, 00:08:05.582 "claim_type": "exclusive_write", 00:08:05.582 "zoned": false, 00:08:05.582 "supported_io_types": { 00:08:05.582 "read": true, 00:08:05.582 "write": true, 00:08:05.582 "unmap": true, 00:08:05.582 "flush": true, 00:08:05.582 "reset": true, 00:08:05.582 "nvme_admin": false, 00:08:05.582 "nvme_io": false, 00:08:05.582 "nvme_io_md": false, 00:08:05.582 "write_zeroes": true, 00:08:05.582 "zcopy": true, 00:08:05.582 "get_zone_info": false, 00:08:05.582 "zone_management": false, 00:08:05.582 "zone_append": false, 00:08:05.582 "compare": false, 00:08:05.582 "compare_and_write": false, 00:08:05.582 "abort": true, 00:08:05.582 "seek_hole": false, 00:08:05.582 "seek_data": false, 00:08:05.582 "copy": true, 00:08:05.582 "nvme_iov_md": false 00:08:05.582 }, 00:08:05.582 "memory_domains": [ 00:08:05.582 { 00:08:05.582 "dma_device_id": "system", 00:08:05.582 "dma_device_type": 1 00:08:05.582 }, 00:08:05.582 { 00:08:05.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.582 "dma_device_type": 2 00:08:05.582 } 00:08:05.582 ], 00:08:05.582 "driver_specific": {} 00:08:05.582 } 00:08:05.582 ] 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:05.582 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:05.842 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.842 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:05.842 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:05.842 05:46:13 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:05.842 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:05.842 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:05.842 "name": "Existed_Raid", 00:08:05.842 "uuid": "1538b2b6-8979-419d-bf31-ec4dd1e2c6d4", 00:08:05.842 "strip_size_kb": 64, 00:08:05.842 "state": "online", 00:08:05.842 "raid_level": "concat", 00:08:05.842 "superblock": true, 00:08:05.842 "num_base_bdevs": 2, 00:08:05.842 "num_base_bdevs_discovered": 2, 00:08:05.842 "num_base_bdevs_operational": 2, 00:08:05.842 "base_bdevs_list": [ 00:08:05.842 { 00:08:05.842 "name": "BaseBdev1", 00:08:05.842 "uuid": "acd475e2-0f05-4544-85ac-c344a5c1588f", 00:08:05.842 "is_configured": true, 00:08:05.842 "data_offset": 2048, 00:08:05.842 "data_size": 63488 00:08:05.842 }, 00:08:05.843 { 00:08:05.843 "name": "BaseBdev2", 00:08:05.843 "uuid": "c5dbd2f4-1fbd-4675-b6ab-21e7de3259e3", 00:08:05.843 "is_configured": true, 00:08:05.843 "data_offset": 2048, 00:08:05.843 "data_size": 63488 00:08:05.843 } 00:08:05.843 ] 00:08:05.843 }' 00:08:05.843 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:05.843 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.102 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:06.102 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:06.102 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:06.102 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:06.102 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:06.102 05:46:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:06.102 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:06.102 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.102 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.102 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:06.102 [2024-12-12 05:46:13.537429] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:06.102 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.102 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:06.102 "name": "Existed_Raid", 00:08:06.102 "aliases": [ 00:08:06.102 "1538b2b6-8979-419d-bf31-ec4dd1e2c6d4" 00:08:06.102 ], 00:08:06.102 "product_name": "Raid Volume", 00:08:06.102 "block_size": 512, 00:08:06.102 "num_blocks": 126976, 00:08:06.102 "uuid": "1538b2b6-8979-419d-bf31-ec4dd1e2c6d4", 00:08:06.102 "assigned_rate_limits": { 00:08:06.102 "rw_ios_per_sec": 0, 00:08:06.102 "rw_mbytes_per_sec": 0, 00:08:06.102 "r_mbytes_per_sec": 0, 00:08:06.102 "w_mbytes_per_sec": 0 00:08:06.102 }, 00:08:06.102 "claimed": false, 00:08:06.102 "zoned": false, 00:08:06.102 "supported_io_types": { 00:08:06.102 "read": true, 00:08:06.102 "write": true, 00:08:06.102 "unmap": true, 00:08:06.102 "flush": true, 00:08:06.102 "reset": true, 00:08:06.102 "nvme_admin": false, 00:08:06.102 "nvme_io": false, 00:08:06.102 "nvme_io_md": false, 00:08:06.102 "write_zeroes": true, 00:08:06.102 "zcopy": false, 00:08:06.102 "get_zone_info": false, 00:08:06.102 "zone_management": false, 00:08:06.102 "zone_append": false, 00:08:06.102 "compare": false, 00:08:06.102 "compare_and_write": false, 00:08:06.102 "abort": false, 00:08:06.102 "seek_hole": false, 
00:08:06.102 "seek_data": false, 00:08:06.102 "copy": false, 00:08:06.102 "nvme_iov_md": false 00:08:06.102 }, 00:08:06.102 "memory_domains": [ 00:08:06.102 { 00:08:06.102 "dma_device_id": "system", 00:08:06.102 "dma_device_type": 1 00:08:06.102 }, 00:08:06.102 { 00:08:06.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.102 "dma_device_type": 2 00:08:06.102 }, 00:08:06.102 { 00:08:06.102 "dma_device_id": "system", 00:08:06.102 "dma_device_type": 1 00:08:06.102 }, 00:08:06.102 { 00:08:06.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.102 "dma_device_type": 2 00:08:06.102 } 00:08:06.102 ], 00:08:06.102 "driver_specific": { 00:08:06.102 "raid": { 00:08:06.102 "uuid": "1538b2b6-8979-419d-bf31-ec4dd1e2c6d4", 00:08:06.102 "strip_size_kb": 64, 00:08:06.102 "state": "online", 00:08:06.102 "raid_level": "concat", 00:08:06.102 "superblock": true, 00:08:06.102 "num_base_bdevs": 2, 00:08:06.102 "num_base_bdevs_discovered": 2, 00:08:06.102 "num_base_bdevs_operational": 2, 00:08:06.102 "base_bdevs_list": [ 00:08:06.102 { 00:08:06.102 "name": "BaseBdev1", 00:08:06.102 "uuid": "acd475e2-0f05-4544-85ac-c344a5c1588f", 00:08:06.102 "is_configured": true, 00:08:06.102 "data_offset": 2048, 00:08:06.102 "data_size": 63488 00:08:06.102 }, 00:08:06.102 { 00:08:06.102 "name": "BaseBdev2", 00:08:06.102 "uuid": "c5dbd2f4-1fbd-4675-b6ab-21e7de3259e3", 00:08:06.102 "is_configured": true, 00:08:06.102 "data_offset": 2048, 00:08:06.102 "data_size": 63488 00:08:06.103 } 00:08:06.103 ] 00:08:06.103 } 00:08:06.103 } 00:08:06.103 }' 00:08:06.103 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:06.363 BaseBdev2' 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:06.363 05:46:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.363 [2024-12-12 05:46:13.756818] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:06.363 [2024-12-12 05:46:13.756849] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:06.363 [2024-12-12 05:46:13.756898] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:06.363 05:46:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.363 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.623 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.623 "name": "Existed_Raid", 00:08:06.623 "uuid": "1538b2b6-8979-419d-bf31-ec4dd1e2c6d4", 00:08:06.623 "strip_size_kb": 64, 00:08:06.623 "state": "offline", 00:08:06.623 "raid_level": "concat", 00:08:06.623 "superblock": true, 00:08:06.623 "num_base_bdevs": 2, 00:08:06.623 "num_base_bdevs_discovered": 1, 00:08:06.623 "num_base_bdevs_operational": 1, 00:08:06.623 "base_bdevs_list": [ 00:08:06.623 { 00:08:06.623 "name": null, 00:08:06.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.623 "is_configured": false, 00:08:06.623 "data_offset": 0, 00:08:06.623 "data_size": 63488 00:08:06.623 }, 00:08:06.623 { 00:08:06.623 "name": 
"BaseBdev2", 00:08:06.623 "uuid": "c5dbd2f4-1fbd-4675-b6ab-21e7de3259e3", 00:08:06.623 "is_configured": true, 00:08:06.623 "data_offset": 2048, 00:08:06.623 "data_size": 63488 00:08:06.623 } 00:08:06.623 ] 00:08:06.623 }' 00:08:06.623 05:46:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.623 05:46:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.883 05:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:06.883 05:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:06.883 05:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.883 05:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:06.883 05:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.883 05:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.883 05:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:06.883 05:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:06.883 05:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:06.883 05:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:06.883 05:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:06.883 05:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:06.883 [2024-12-12 05:46:14.344884] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:06.883 [2024-12-12 05:46:14.344985] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62947 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62947 ']' 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62947 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62947 00:08:07.144 killing process with 
pid 62947 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62947' 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62947 00:08:07.144 [2024-12-12 05:46:14.534091] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:07.144 05:46:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62947 00:08:07.144 [2024-12-12 05:46:14.551164] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:08.529 05:46:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:08.529 00:08:08.529 real 0m4.953s 00:08:08.529 user 0m7.173s 00:08:08.529 sys 0m0.790s 00:08:08.529 ************************************ 00:08:08.529 END TEST raid_state_function_test_sb 00:08:08.529 ************************************ 00:08:08.529 05:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.529 05:46:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:08.529 05:46:15 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:08.529 05:46:15 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:08.529 05:46:15 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.529 05:46:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:08.529 ************************************ 00:08:08.529 START TEST raid_superblock_test 00:08:08.529 ************************************ 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # 
raid_superblock_test concat 2 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63194 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:08.529 05:46:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63194 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63194 ']' 00:08:08.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.529 05:46:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.529 [2024-12-12 05:46:15.781967] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:08:08.529 [2024-12-12 05:46:15.782136] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63194 ] 00:08:08.529 [2024-12-12 05:46:15.937984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.529 [2024-12-12 05:46:16.047828] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.789 [2024-12-12 05:46:16.229836] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.789 [2024-12-12 05:46:16.229967] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:09.360 
05:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.360 malloc1 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.360 [2024-12-12 05:46:16.665479] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:09.360 [2024-12-12 05:46:16.665564] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.360 [2024-12-12 05:46:16.665588] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:09.360 [2024-12-12 05:46:16.665596] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.360 [2024-12-12 05:46:16.667645] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.360 [2024-12-12 05:46:16.667680] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:09.360 pt1 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.360 malloc2 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.360 [2024-12-12 05:46:16.719634] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:09.360 [2024-12-12 05:46:16.719753] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.360 [2024-12-12 05:46:16.719794] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:09.360 [2024-12-12 05:46:16.719828] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.360 [2024-12-12 05:46:16.721895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.360 [2024-12-12 05:46:16.721957] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:09.360 
pt2 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.360 05:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.360 [2024-12-12 05:46:16.731668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:09.360 [2024-12-12 05:46:16.733422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:09.361 [2024-12-12 05:46:16.733647] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:09.361 [2024-12-12 05:46:16.733696] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:09.361 [2024-12-12 05:46:16.733967] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:09.361 [2024-12-12 05:46:16.734152] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:09.361 [2024-12-12 05:46:16.734195] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:09.361 [2024-12-12 05:46:16.734409] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.361 05:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.361 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:09.361 05:46:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:09.361 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:09.361 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:09.361 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:09.361 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:09.361 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:09.361 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:09.361 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:09.361 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:09.361 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:09.361 05:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.361 05:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.361 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:09.361 05:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.361 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:09.361 "name": "raid_bdev1", 00:08:09.361 "uuid": "6753671d-22e4-457d-8969-aeff09529d2a", 00:08:09.361 "strip_size_kb": 64, 00:08:09.361 "state": "online", 00:08:09.361 "raid_level": "concat", 00:08:09.361 "superblock": true, 00:08:09.361 "num_base_bdevs": 2, 00:08:09.361 "num_base_bdevs_discovered": 2, 00:08:09.361 "num_base_bdevs_operational": 2, 00:08:09.361 "base_bdevs_list": [ 00:08:09.361 { 00:08:09.361 "name": "pt1", 
00:08:09.361 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:09.361 "is_configured": true, 00:08:09.361 "data_offset": 2048, 00:08:09.361 "data_size": 63488 00:08:09.361 }, 00:08:09.361 { 00:08:09.361 "name": "pt2", 00:08:09.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:09.361 "is_configured": true, 00:08:09.361 "data_offset": 2048, 00:08:09.361 "data_size": 63488 00:08:09.361 } 00:08:09.361 ] 00:08:09.361 }' 00:08:09.361 05:46:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:09.361 05:46:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.620 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:09.620 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:09.620 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:09.620 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:09.620 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:09.621 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:09.621 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:09.621 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:09.621 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.621 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.880 [2024-12-12 05:46:17.147194] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.880 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.880 05:46:17 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:09.880 "name": "raid_bdev1", 00:08:09.880 "aliases": [ 00:08:09.880 "6753671d-22e4-457d-8969-aeff09529d2a" 00:08:09.880 ], 00:08:09.880 "product_name": "Raid Volume", 00:08:09.880 "block_size": 512, 00:08:09.880 "num_blocks": 126976, 00:08:09.880 "uuid": "6753671d-22e4-457d-8969-aeff09529d2a", 00:08:09.880 "assigned_rate_limits": { 00:08:09.880 "rw_ios_per_sec": 0, 00:08:09.880 "rw_mbytes_per_sec": 0, 00:08:09.880 "r_mbytes_per_sec": 0, 00:08:09.881 "w_mbytes_per_sec": 0 00:08:09.881 }, 00:08:09.881 "claimed": false, 00:08:09.881 "zoned": false, 00:08:09.881 "supported_io_types": { 00:08:09.881 "read": true, 00:08:09.881 "write": true, 00:08:09.881 "unmap": true, 00:08:09.881 "flush": true, 00:08:09.881 "reset": true, 00:08:09.881 "nvme_admin": false, 00:08:09.881 "nvme_io": false, 00:08:09.881 "nvme_io_md": false, 00:08:09.881 "write_zeroes": true, 00:08:09.881 "zcopy": false, 00:08:09.881 "get_zone_info": false, 00:08:09.881 "zone_management": false, 00:08:09.881 "zone_append": false, 00:08:09.881 "compare": false, 00:08:09.881 "compare_and_write": false, 00:08:09.881 "abort": false, 00:08:09.881 "seek_hole": false, 00:08:09.881 "seek_data": false, 00:08:09.881 "copy": false, 00:08:09.881 "nvme_iov_md": false 00:08:09.881 }, 00:08:09.881 "memory_domains": [ 00:08:09.881 { 00:08:09.881 "dma_device_id": "system", 00:08:09.881 "dma_device_type": 1 00:08:09.881 }, 00:08:09.881 { 00:08:09.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.881 "dma_device_type": 2 00:08:09.881 }, 00:08:09.881 { 00:08:09.881 "dma_device_id": "system", 00:08:09.881 "dma_device_type": 1 00:08:09.881 }, 00:08:09.881 { 00:08:09.881 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:09.881 "dma_device_type": 2 00:08:09.881 } 00:08:09.881 ], 00:08:09.881 "driver_specific": { 00:08:09.881 "raid": { 00:08:09.881 "uuid": "6753671d-22e4-457d-8969-aeff09529d2a", 00:08:09.881 "strip_size_kb": 64, 00:08:09.881 "state": "online", 00:08:09.881 
"raid_level": "concat", 00:08:09.881 "superblock": true, 00:08:09.881 "num_base_bdevs": 2, 00:08:09.881 "num_base_bdevs_discovered": 2, 00:08:09.881 "num_base_bdevs_operational": 2, 00:08:09.881 "base_bdevs_list": [ 00:08:09.881 { 00:08:09.881 "name": "pt1", 00:08:09.881 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:09.881 "is_configured": true, 00:08:09.881 "data_offset": 2048, 00:08:09.881 "data_size": 63488 00:08:09.881 }, 00:08:09.881 { 00:08:09.881 "name": "pt2", 00:08:09.881 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:09.881 "is_configured": true, 00:08:09.881 "data_offset": 2048, 00:08:09.881 "data_size": 63488 00:08:09.881 } 00:08:09.881 ] 00:08:09.881 } 00:08:09.881 } 00:08:09.881 }' 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:09.881 pt2' 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.881 05:46:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:09.881 [2024-12-12 05:46:17.386881] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:09.881 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=6753671d-22e4-457d-8969-aeff09529d2a 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
6753671d-22e4-457d-8969-aeff09529d2a ']' 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.142 [2024-12-12 05:46:17.410507] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:10.142 [2024-12-12 05:46:17.410542] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:10.142 [2024-12-12 05:46:17.410627] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:10.142 [2024-12-12 05:46:17.410677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:10.142 [2024-12-12 05:46:17.410691] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:10.142 05:46:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.142 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.142 [2024-12-12 05:46:17.546326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:10.142 [2024-12-12 05:46:17.548246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:10.142 [2024-12-12 05:46:17.548312] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:10.142 [2024-12-12 05:46:17.548362] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:10.142 [2024-12-12 05:46:17.548376] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:10.143 [2024-12-12 05:46:17.548386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:10.143 request: 00:08:10.143 { 00:08:10.143 "name": "raid_bdev1", 00:08:10.143 "raid_level": "concat", 00:08:10.143 "base_bdevs": [ 00:08:10.143 "malloc1", 00:08:10.143 "malloc2" 00:08:10.143 ], 00:08:10.143 "strip_size_kb": 64, 
00:08:10.143 "superblock": false, 00:08:10.143 "method": "bdev_raid_create", 00:08:10.143 "req_id": 1 00:08:10.143 } 00:08:10.143 Got JSON-RPC error response 00:08:10.143 response: 00:08:10.143 { 00:08:10.143 "code": -17, 00:08:10.143 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:10.143 } 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.143 [2024-12-12 05:46:17.610175] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:08:10.143 [2024-12-12 05:46:17.610273] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:10.143 [2024-12-12 05:46:17.610315] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:10.143 [2024-12-12 05:46:17.610349] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:10.143 [2024-12-12 05:46:17.612633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:10.143 [2024-12-12 05:46:17.612698] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:10.143 [2024-12-12 05:46:17.612797] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:10.143 [2024-12-12 05:46:17.612896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:10.143 pt1 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.143 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.403 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.403 "name": "raid_bdev1", 00:08:10.403 "uuid": "6753671d-22e4-457d-8969-aeff09529d2a", 00:08:10.403 "strip_size_kb": 64, 00:08:10.403 "state": "configuring", 00:08:10.403 "raid_level": "concat", 00:08:10.403 "superblock": true, 00:08:10.403 "num_base_bdevs": 2, 00:08:10.403 "num_base_bdevs_discovered": 1, 00:08:10.403 "num_base_bdevs_operational": 2, 00:08:10.403 "base_bdevs_list": [ 00:08:10.403 { 00:08:10.403 "name": "pt1", 00:08:10.403 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:10.403 "is_configured": true, 00:08:10.403 "data_offset": 2048, 00:08:10.403 "data_size": 63488 00:08:10.403 }, 00:08:10.403 { 00:08:10.403 "name": null, 00:08:10.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:10.403 "is_configured": false, 00:08:10.403 "data_offset": 2048, 00:08:10.403 "data_size": 63488 00:08:10.403 } 00:08:10.403 ] 00:08:10.403 }' 00:08:10.403 05:46:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.403 05:46:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.664 [2024-12-12 05:46:18.089398] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:10.664 [2024-12-12 05:46:18.089476] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:10.664 [2024-12-12 05:46:18.089497] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:10.664 [2024-12-12 05:46:18.089526] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:10.664 [2024-12-12 05:46:18.090019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:10.664 [2024-12-12 05:46:18.090050] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:10.664 [2024-12-12 05:46:18.090134] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:10.664 [2024-12-12 05:46:18.090163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:10.664 [2024-12-12 05:46:18.090288] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:10.664 [2024-12-12 05:46:18.090299] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:10.664 [2024-12-12 05:46:18.090568] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:10.664 [2024-12-12 05:46:18.090712] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
00:08:10.664 [2024-12-12 05:46:18.090721] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:10.664 [2024-12-12 05:46:18.090868] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.664 pt2 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:10.664 "name": "raid_bdev1", 00:08:10.664 "uuid": "6753671d-22e4-457d-8969-aeff09529d2a", 00:08:10.664 "strip_size_kb": 64, 00:08:10.664 "state": "online", 00:08:10.664 "raid_level": "concat", 00:08:10.664 "superblock": true, 00:08:10.664 "num_base_bdevs": 2, 00:08:10.664 "num_base_bdevs_discovered": 2, 00:08:10.664 "num_base_bdevs_operational": 2, 00:08:10.664 "base_bdevs_list": [ 00:08:10.664 { 00:08:10.664 "name": "pt1", 00:08:10.664 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:10.664 "is_configured": true, 00:08:10.664 "data_offset": 2048, 00:08:10.664 "data_size": 63488 00:08:10.664 }, 00:08:10.664 { 00:08:10.664 "name": "pt2", 00:08:10.664 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:10.664 "is_configured": true, 00:08:10.664 "data_offset": 2048, 00:08:10.664 "data_size": 63488 00:08:10.664 } 00:08:10.664 ] 00:08:10.664 }' 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:10.664 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.245 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:11.245 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:11.245 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:11.245 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:11.245 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:11.245 05:46:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:11.245 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:11.245 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.245 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:11.245 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.245 [2024-12-12 05:46:18.528872] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.245 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.245 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:11.245 "name": "raid_bdev1", 00:08:11.245 "aliases": [ 00:08:11.245 "6753671d-22e4-457d-8969-aeff09529d2a" 00:08:11.245 ], 00:08:11.245 "product_name": "Raid Volume", 00:08:11.245 "block_size": 512, 00:08:11.245 "num_blocks": 126976, 00:08:11.245 "uuid": "6753671d-22e4-457d-8969-aeff09529d2a", 00:08:11.245 "assigned_rate_limits": { 00:08:11.245 "rw_ios_per_sec": 0, 00:08:11.245 "rw_mbytes_per_sec": 0, 00:08:11.245 "r_mbytes_per_sec": 0, 00:08:11.245 "w_mbytes_per_sec": 0 00:08:11.245 }, 00:08:11.245 "claimed": false, 00:08:11.245 "zoned": false, 00:08:11.245 "supported_io_types": { 00:08:11.245 "read": true, 00:08:11.245 "write": true, 00:08:11.245 "unmap": true, 00:08:11.245 "flush": true, 00:08:11.245 "reset": true, 00:08:11.245 "nvme_admin": false, 00:08:11.245 "nvme_io": false, 00:08:11.245 "nvme_io_md": false, 00:08:11.245 "write_zeroes": true, 00:08:11.245 "zcopy": false, 00:08:11.245 "get_zone_info": false, 00:08:11.245 "zone_management": false, 00:08:11.245 "zone_append": false, 00:08:11.245 "compare": false, 00:08:11.245 "compare_and_write": false, 00:08:11.245 "abort": false, 00:08:11.245 "seek_hole": false, 00:08:11.245 
"seek_data": false, 00:08:11.245 "copy": false, 00:08:11.245 "nvme_iov_md": false 00:08:11.245 }, 00:08:11.245 "memory_domains": [ 00:08:11.245 { 00:08:11.245 "dma_device_id": "system", 00:08:11.245 "dma_device_type": 1 00:08:11.245 }, 00:08:11.245 { 00:08:11.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.245 "dma_device_type": 2 00:08:11.245 }, 00:08:11.245 { 00:08:11.245 "dma_device_id": "system", 00:08:11.245 "dma_device_type": 1 00:08:11.245 }, 00:08:11.245 { 00:08:11.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.245 "dma_device_type": 2 00:08:11.245 } 00:08:11.245 ], 00:08:11.245 "driver_specific": { 00:08:11.245 "raid": { 00:08:11.245 "uuid": "6753671d-22e4-457d-8969-aeff09529d2a", 00:08:11.245 "strip_size_kb": 64, 00:08:11.245 "state": "online", 00:08:11.245 "raid_level": "concat", 00:08:11.245 "superblock": true, 00:08:11.245 "num_base_bdevs": 2, 00:08:11.245 "num_base_bdevs_discovered": 2, 00:08:11.245 "num_base_bdevs_operational": 2, 00:08:11.245 "base_bdevs_list": [ 00:08:11.245 { 00:08:11.245 "name": "pt1", 00:08:11.245 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:11.245 "is_configured": true, 00:08:11.245 "data_offset": 2048, 00:08:11.245 "data_size": 63488 00:08:11.245 }, 00:08:11.245 { 00:08:11.245 "name": "pt2", 00:08:11.245 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:11.245 "is_configured": true, 00:08:11.245 "data_offset": 2048, 00:08:11.245 "data_size": 63488 00:08:11.245 } 00:08:11.245 ] 00:08:11.245 } 00:08:11.245 } 00:08:11.245 }' 00:08:11.245 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:11.245 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:11.245 pt2' 00:08:11.245 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.245 05:46:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:11.245 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.245 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:11.245 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.246 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.246 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.246 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.246 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.246 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.246 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:11.246 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:11.246 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:11.246 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.246 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.246 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.246 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:11.246 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:11.246 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 
00:08:11.246 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:11.246 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:11.246 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.246 [2024-12-12 05:46:18.744432] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.246 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:11.505 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 6753671d-22e4-457d-8969-aeff09529d2a '!=' 6753671d-22e4-457d-8969-aeff09529d2a ']' 00:08:11.505 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:11.505 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:11.505 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:11.505 05:46:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63194 00:08:11.505 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63194 ']' 00:08:11.505 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63194 00:08:11.505 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:11.505 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:11.505 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63194 00:08:11.505 killing process with pid 63194 00:08:11.505 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:11.505 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:11.505 05:46:18 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 63194' 00:08:11.505 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63194 00:08:11.505 [2024-12-12 05:46:18.814969] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:11.505 [2024-12-12 05:46:18.815052] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:11.505 [2024-12-12 05:46:18.815101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:11.505 [2024-12-12 05:46:18.815112] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:11.505 05:46:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63194 00:08:11.505 [2024-12-12 05:46:19.014707] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:12.888 05:46:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:12.888 00:08:12.888 real 0m4.395s 00:08:12.888 user 0m6.197s 00:08:12.888 sys 0m0.695s 00:08:12.888 ************************************ 00:08:12.888 END TEST raid_superblock_test 00:08:12.888 ************************************ 00:08:12.888 05:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:12.888 05:46:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.888 05:46:20 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:12.888 05:46:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:12.888 05:46:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:12.888 05:46:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:12.888 ************************************ 00:08:12.888 START TEST raid_read_error_test 00:08:12.888 ************************************ 00:08:12.888 05:46:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:12.888 05:46:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.VS69Gspxzm 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63405 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63405 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63405 ']' 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.888 05:46:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.888 [2024-12-12 05:46:20.259400] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:08:12.888 [2024-12-12 05:46:20.259530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63405 ] 00:08:13.148 [2024-12-12 05:46:20.415641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.148 [2024-12-12 05:46:20.521491] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.409 [2024-12-12 05:46:20.717355] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.409 [2024-12-12 05:46:20.717391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.669 BaseBdev1_malloc 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.669 true 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.669 [2024-12-12 05:46:21.137567] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:13.669 [2024-12-12 05:46:21.137618] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.669 [2024-12-12 05:46:21.137637] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:13.669 [2024-12-12 05:46:21.137647] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.669 [2024-12-12 05:46:21.139802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.669 [2024-12-12 05:46:21.139844] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:13.669 BaseBdev1 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.669 BaseBdev2_malloc 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.669 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.930 true 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.930 [2024-12-12 05:46:21.204223] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:13.930 [2024-12-12 05:46:21.204277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.930 [2024-12-12 05:46:21.204292] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:13.930 [2024-12-12 05:46:21.204302] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.930 [2024-12-12 05:46:21.206355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.930 [2024-12-12 05:46:21.206395] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:13.930 BaseBdev2 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.930 [2024-12-12 05:46:21.216258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:08:13.930 [2024-12-12 05:46:21.218085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:13.930 [2024-12-12 05:46:21.218272] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:13.930 [2024-12-12 05:46:21.218288] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:13.930 [2024-12-12 05:46:21.218513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:13.930 [2024-12-12 05:46:21.218709] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:13.930 [2024-12-12 05:46:21.218732] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:13.930 [2024-12-12 05:46:21.218875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:13.930 "name": "raid_bdev1", 00:08:13.930 "uuid": "2a918eb0-588e-43d4-85b2-a2a4304cbb49", 00:08:13.930 "strip_size_kb": 64, 00:08:13.930 "state": "online", 00:08:13.930 "raid_level": "concat", 00:08:13.930 "superblock": true, 00:08:13.930 "num_base_bdevs": 2, 00:08:13.930 "num_base_bdevs_discovered": 2, 00:08:13.930 "num_base_bdevs_operational": 2, 00:08:13.930 "base_bdevs_list": [ 00:08:13.930 { 00:08:13.930 "name": "BaseBdev1", 00:08:13.930 "uuid": "4dc972f1-0704-5e2d-a301-e9f7db529c84", 00:08:13.930 "is_configured": true, 00:08:13.930 "data_offset": 2048, 00:08:13.930 "data_size": 63488 00:08:13.930 }, 00:08:13.930 { 00:08:13.930 "name": "BaseBdev2", 00:08:13.930 "uuid": "fe8967f0-d8c0-5f0b-ac22-82f3f3923a47", 00:08:13.930 "is_configured": true, 00:08:13.930 "data_offset": 2048, 00:08:13.930 "data_size": 63488 00:08:13.930 } 00:08:13.930 ] 00:08:13.930 }' 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:13.930 05:46:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.190 05:46:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:14.190 05:46:21 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:14.450 [2024-12-12 05:46:21.716592] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.391 "name": "raid_bdev1", 00:08:15.391 "uuid": "2a918eb0-588e-43d4-85b2-a2a4304cbb49", 00:08:15.391 "strip_size_kb": 64, 00:08:15.391 "state": "online", 00:08:15.391 "raid_level": "concat", 00:08:15.391 "superblock": true, 00:08:15.391 "num_base_bdevs": 2, 00:08:15.391 "num_base_bdevs_discovered": 2, 00:08:15.391 "num_base_bdevs_operational": 2, 00:08:15.391 "base_bdevs_list": [ 00:08:15.391 { 00:08:15.391 "name": "BaseBdev1", 00:08:15.391 "uuid": "4dc972f1-0704-5e2d-a301-e9f7db529c84", 00:08:15.391 "is_configured": true, 00:08:15.391 "data_offset": 2048, 00:08:15.391 "data_size": 63488 00:08:15.391 }, 00:08:15.391 { 00:08:15.391 "name": "BaseBdev2", 00:08:15.391 "uuid": "fe8967f0-d8c0-5f0b-ac22-82f3f3923a47", 00:08:15.391 "is_configured": true, 00:08:15.391 "data_offset": 2048, 00:08:15.391 "data_size": 63488 00:08:15.391 } 00:08:15.391 ] 00:08:15.391 }' 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.391 05:46:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.651 05:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:15.651 05:46:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:15.651 05:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.651 [2024-12-12 05:46:23.060105] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:15.651 [2024-12-12 05:46:23.060142] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:15.651 [2024-12-12 05:46:23.062816] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:15.651 [2024-12-12 05:46:23.062968] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:15.651 [2024-12-12 05:46:23.063012] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:15.651 [2024-12-12 05:46:23.063026] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:15.651 { 00:08:15.651 "results": [ 00:08:15.651 { 00:08:15.651 "job": "raid_bdev1", 00:08:15.651 "core_mask": "0x1", 00:08:15.651 "workload": "randrw", 00:08:15.651 "percentage": 50, 00:08:15.651 "status": "finished", 00:08:15.651 "queue_depth": 1, 00:08:15.651 "io_size": 131072, 00:08:15.651 "runtime": 1.344389, 00:08:15.651 "iops": 16187.279128288017, 00:08:15.651 "mibps": 2023.4098910360021, 00:08:15.651 "io_failed": 1, 00:08:15.651 "io_timeout": 0, 00:08:15.651 "avg_latency_us": 85.39170271565838, 00:08:15.651 "min_latency_us": 26.606113537117903, 00:08:15.651 "max_latency_us": 1395.1441048034935 00:08:15.651 } 00:08:15.651 ], 00:08:15.651 "core_count": 1 00:08:15.651 } 00:08:15.651 05:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:15.651 05:46:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63405 00:08:15.651 05:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63405 ']' 00:08:15.651 05:46:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63405 00:08:15.651 05:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:15.651 05:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.651 05:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63405 00:08:15.651 05:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.651 05:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.651 05:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63405' 00:08:15.651 killing process with pid 63405 00:08:15.651 05:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63405 00:08:15.651 [2024-12-12 05:46:23.111101] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:15.651 05:46:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63405 00:08:15.911 [2024-12-12 05:46:23.244388] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:17.293 05:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:17.293 05:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.VS69Gspxzm 00:08:17.293 05:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:17.293 05:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:17.293 05:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:17.294 ************************************ 00:08:17.294 END TEST raid_read_error_test 00:08:17.294 ************************************ 00:08:17.294 05:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 
00:08:17.294 05:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:17.294 05:46:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:17.294 00:08:17.294 real 0m4.226s 00:08:17.294 user 0m5.023s 00:08:17.294 sys 0m0.530s 00:08:17.294 05:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.294 05:46:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.294 05:46:24 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:17.294 05:46:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:17.294 05:46:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.294 05:46:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:17.294 ************************************ 00:08:17.294 START TEST raid_write_error_test 00:08:17.294 ************************************ 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.294 05:46:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.u4E197S8zf 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63545 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63545 00:08:17.294 05:46:24 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63545 ']' 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.294 05:46:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.294 [2024-12-12 05:46:24.554319] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:08:17.294 [2024-12-12 05:46:24.554528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63545 ] 00:08:17.294 [2024-12-12 05:46:24.725825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.554 [2024-12-12 05:46:24.839109] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.554 [2024-12-12 05:46:25.026144] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.554 [2024-12-12 05:46:25.026195] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:18.124 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.124 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:18.124 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:18.124 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:18.124 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.124 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.124 BaseBdev1_malloc 00:08:18.124 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.124 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:18.124 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.124 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.124 true 00:08:18.124 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.124 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:18.124 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.124 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.124 [2024-12-12 05:46:25.438168] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:18.124 [2024-12-12 05:46:25.438227] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.124 [2024-12-12 05:46:25.438246] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:18.124 [2024-12-12 05:46:25.438256] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.124 [2024-12-12 05:46:25.440372] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.124 [2024-12-12 05:46:25.440453] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:18.124 BaseBdev1 00:08:18.124 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.124 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:18.124 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:18.124 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.124 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.124 BaseBdev2_malloc 00:08:18.124 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.125 true 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.125 [2024-12-12 05:46:25.502027] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:18.125 [2024-12-12 05:46:25.502080] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.125 [2024-12-12 05:46:25.502095] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:18.125 
[2024-12-12 05:46:25.502104] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.125 [2024-12-12 05:46:25.504125] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.125 [2024-12-12 05:46:25.504164] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:18.125 BaseBdev2 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.125 [2024-12-12 05:46:25.514064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:18.125 [2024-12-12 05:46:25.515810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:18.125 [2024-12-12 05:46:25.515984] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:18.125 [2024-12-12 05:46:25.516000] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:18.125 [2024-12-12 05:46:25.516207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:18.125 [2024-12-12 05:46:25.516370] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:18.125 [2024-12-12 05:46:25.516382] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:18.125 [2024-12-12 05:46:25.516536] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.125 
05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.125 "name": "raid_bdev1", 00:08:18.125 "uuid": "2c7db167-3e98-43d9-8d71-cb12e41cd17d", 00:08:18.125 "strip_size_kb": 64, 00:08:18.125 "state": "online", 00:08:18.125 "raid_level": "concat", 00:08:18.125 "superblock": true, 
00:08:18.125 "num_base_bdevs": 2, 00:08:18.125 "num_base_bdevs_discovered": 2, 00:08:18.125 "num_base_bdevs_operational": 2, 00:08:18.125 "base_bdevs_list": [ 00:08:18.125 { 00:08:18.125 "name": "BaseBdev1", 00:08:18.125 "uuid": "c2b1bca3-37be-56a4-949d-69679d763d4c", 00:08:18.125 "is_configured": true, 00:08:18.125 "data_offset": 2048, 00:08:18.125 "data_size": 63488 00:08:18.125 }, 00:08:18.125 { 00:08:18.125 "name": "BaseBdev2", 00:08:18.125 "uuid": "dc6b8de4-2488-5296-9676-7570d2e175ba", 00:08:18.125 "is_configured": true, 00:08:18.125 "data_offset": 2048, 00:08:18.125 "data_size": 63488 00:08:18.125 } 00:08:18.125 ] 00:08:18.125 }' 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.125 05:46:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.694 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:18.694 05:46:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:18.694 [2024-12-12 05:46:26.038508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:19.632 05:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:19.632 05:46:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.632 05:46:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.632 05:46:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.632 05:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:19.632 05:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:19.632 05:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # 
expected_num_base_bdevs=2 00:08:19.632 05:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:19.632 05:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:19.632 05:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:19.632 05:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:19.632 05:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:19.632 05:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:19.632 05:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:19.632 05:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:19.632 05:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:19.633 05:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:19.633 05:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:19.633 05:46:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.633 05:46:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.633 05:46:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.633 05:46:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:19.633 05:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:19.633 "name": "raid_bdev1", 00:08:19.633 "uuid": "2c7db167-3e98-43d9-8d71-cb12e41cd17d", 00:08:19.633 "strip_size_kb": 64, 00:08:19.633 "state": "online", 00:08:19.633 "raid_level": "concat", 
00:08:19.633 "superblock": true, 00:08:19.633 "num_base_bdevs": 2, 00:08:19.633 "num_base_bdevs_discovered": 2, 00:08:19.633 "num_base_bdevs_operational": 2, 00:08:19.633 "base_bdevs_list": [ 00:08:19.633 { 00:08:19.633 "name": "BaseBdev1", 00:08:19.633 "uuid": "c2b1bca3-37be-56a4-949d-69679d763d4c", 00:08:19.633 "is_configured": true, 00:08:19.633 "data_offset": 2048, 00:08:19.633 "data_size": 63488 00:08:19.633 }, 00:08:19.633 { 00:08:19.633 "name": "BaseBdev2", 00:08:19.633 "uuid": "dc6b8de4-2488-5296-9676-7570d2e175ba", 00:08:19.633 "is_configured": true, 00:08:19.633 "data_offset": 2048, 00:08:19.633 "data_size": 63488 00:08:19.633 } 00:08:19.633 ] 00:08:19.633 }' 00:08:19.633 05:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:19.633 05:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.892 05:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:19.892 05:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:19.892 05:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.892 [2024-12-12 05:46:27.410516] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:19.892 [2024-12-12 05:46:27.410553] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:19.892 [2024-12-12 05:46:27.413145] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:20.152 [2024-12-12 05:46:27.413234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:20.152 [2024-12-12 05:46:27.413273] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:20.152 [2024-12-12 05:46:27.413286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:20.152 { 
00:08:20.152 "results": [ 00:08:20.152 { 00:08:20.152 "job": "raid_bdev1", 00:08:20.152 "core_mask": "0x1", 00:08:20.152 "workload": "randrw", 00:08:20.152 "percentage": 50, 00:08:20.152 "status": "finished", 00:08:20.152 "queue_depth": 1, 00:08:20.152 "io_size": 131072, 00:08:20.152 "runtime": 1.372915, 00:08:20.152 "iops": 16285.786082896611, 00:08:20.152 "mibps": 2035.7232603620764, 00:08:20.152 "io_failed": 1, 00:08:20.152 "io_timeout": 0, 00:08:20.152 "avg_latency_us": 84.8451736178922, 00:08:20.152 "min_latency_us": 25.6, 00:08:20.152 "max_latency_us": 1380.8349344978167 00:08:20.152 } 00:08:20.152 ], 00:08:20.152 "core_count": 1 00:08:20.152 } 00:08:20.152 05:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.152 05:46:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63545 00:08:20.152 05:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63545 ']' 00:08:20.152 05:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63545 00:08:20.152 05:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:20.152 05:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.152 05:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63545 00:08:20.152 killing process with pid 63545 00:08:20.152 05:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.152 05:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.152 05:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63545' 00:08:20.152 05:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63545 00:08:20.152 [2024-12-12 05:46:27.453027] 
bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:20.152 05:46:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63545 00:08:20.152 [2024-12-12 05:46:27.585721] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.541 05:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.u4E197S8zf 00:08:21.541 05:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:21.541 05:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:21.541 05:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:21.541 05:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:21.541 05:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:21.541 05:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:21.541 05:46:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:21.541 00:08:21.541 real 0m4.271s 00:08:21.541 user 0m5.129s 00:08:21.541 sys 0m0.514s 00:08:21.541 ************************************ 00:08:21.541 END TEST raid_write_error_test 00:08:21.541 ************************************ 00:08:21.541 05:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.541 05:46:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.541 05:46:28 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:21.541 05:46:28 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:21.541 05:46:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:21.541 05:46:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.541 05:46:28 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:08:21.541 ************************************ 00:08:21.541 START TEST raid_state_function_test 00:08:21.541 ************************************ 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63683 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63683' 00:08:21.541 Process raid pid: 63683 00:08:21.541 05:46:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63683 00:08:21.542 05:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63683 ']' 00:08:21.542 05:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.542 05:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.542 05:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:21.542 05:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.542 05:46:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.542 [2024-12-12 05:46:28.890849] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:08:21.542 [2024-12-12 05:46:28.891066] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.801 [2024-12-12 05:46:29.064275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.801 [2024-12-12 05:46:29.173125] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.061 [2024-12-12 05:46:29.362188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.061 [2024-12-12 05:46:29.362321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.321 05:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.321 05:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:22.321 05:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:22.321 05:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.321 05:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.321 [2024-12-12 05:46:29.710628] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.321 [2024-12-12 05:46:29.710680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.321 [2024-12-12 05:46:29.710690] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:08:22.321 [2024-12-12 05:46:29.710700] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.322 05:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.322 05:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:22.322 05:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.322 05:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.322 05:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:22.322 05:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:22.322 05:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.322 05:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.322 05:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.322 05:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.322 05:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.322 05:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.322 05:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.322 05:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.322 05:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.322 05:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:08:22.322 05:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.322 "name": "Existed_Raid", 00:08:22.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.322 "strip_size_kb": 0, 00:08:22.322 "state": "configuring", 00:08:22.322 "raid_level": "raid1", 00:08:22.322 "superblock": false, 00:08:22.322 "num_base_bdevs": 2, 00:08:22.322 "num_base_bdevs_discovered": 0, 00:08:22.322 "num_base_bdevs_operational": 2, 00:08:22.322 "base_bdevs_list": [ 00:08:22.322 { 00:08:22.322 "name": "BaseBdev1", 00:08:22.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.322 "is_configured": false, 00:08:22.322 "data_offset": 0, 00:08:22.322 "data_size": 0 00:08:22.322 }, 00:08:22.322 { 00:08:22.322 "name": "BaseBdev2", 00:08:22.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.322 "is_configured": false, 00:08:22.322 "data_offset": 0, 00:08:22.322 "data_size": 0 00:08:22.322 } 00:08:22.322 ] 00:08:22.322 }' 00:08:22.322 05:46:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.322 05:46:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.891 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.891 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.891 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.891 [2024-12-12 05:46:30.165788] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:22.891 [2024-12-12 05:46:30.165872] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:22.891 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.891 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 
-- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:22.891 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.891 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.891 [2024-12-12 05:46:30.177751] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:22.891 [2024-12-12 05:46:30.177833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:22.891 [2024-12-12 05:46:30.177860] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.891 [2024-12-12 05:46:30.177884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.891 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.891 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:22.891 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.891 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.891 [2024-12-12 05:46:30.225144] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.891 BaseBdev1 00:08:22.891 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.891 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:22.891 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:22.892 
05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.892 [ 00:08:22.892 { 00:08:22.892 "name": "BaseBdev1", 00:08:22.892 "aliases": [ 00:08:22.892 "71b301ea-edeb-4f6c-9bdc-8ca9e9e3401c" 00:08:22.892 ], 00:08:22.892 "product_name": "Malloc disk", 00:08:22.892 "block_size": 512, 00:08:22.892 "num_blocks": 65536, 00:08:22.892 "uuid": "71b301ea-edeb-4f6c-9bdc-8ca9e9e3401c", 00:08:22.892 "assigned_rate_limits": { 00:08:22.892 "rw_ios_per_sec": 0, 00:08:22.892 "rw_mbytes_per_sec": 0, 00:08:22.892 "r_mbytes_per_sec": 0, 00:08:22.892 "w_mbytes_per_sec": 0 00:08:22.892 }, 00:08:22.892 "claimed": true, 00:08:22.892 "claim_type": "exclusive_write", 00:08:22.892 "zoned": false, 00:08:22.892 "supported_io_types": { 00:08:22.892 "read": true, 00:08:22.892 "write": true, 00:08:22.892 "unmap": true, 00:08:22.892 "flush": true, 00:08:22.892 "reset": true, 00:08:22.892 "nvme_admin": false, 00:08:22.892 "nvme_io": false, 00:08:22.892 "nvme_io_md": false, 00:08:22.892 "write_zeroes": true, 00:08:22.892 "zcopy": true, 00:08:22.892 "get_zone_info": 
false, 00:08:22.892 "zone_management": false, 00:08:22.892 "zone_append": false, 00:08:22.892 "compare": false, 00:08:22.892 "compare_and_write": false, 00:08:22.892 "abort": true, 00:08:22.892 "seek_hole": false, 00:08:22.892 "seek_data": false, 00:08:22.892 "copy": true, 00:08:22.892 "nvme_iov_md": false 00:08:22.892 }, 00:08:22.892 "memory_domains": [ 00:08:22.892 { 00:08:22.892 "dma_device_id": "system", 00:08:22.892 "dma_device_type": 1 00:08:22.892 }, 00:08:22.892 { 00:08:22.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.892 "dma_device_type": 2 00:08:22.892 } 00:08:22.892 ], 00:08:22.892 "driver_specific": {} 00:08:22.892 } 00:08:22.892 ] 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.892 "name": "Existed_Raid", 00:08:22.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.892 "strip_size_kb": 0, 00:08:22.892 "state": "configuring", 00:08:22.892 "raid_level": "raid1", 00:08:22.892 "superblock": false, 00:08:22.892 "num_base_bdevs": 2, 00:08:22.892 "num_base_bdevs_discovered": 1, 00:08:22.892 "num_base_bdevs_operational": 2, 00:08:22.892 "base_bdevs_list": [ 00:08:22.892 { 00:08:22.892 "name": "BaseBdev1", 00:08:22.892 "uuid": "71b301ea-edeb-4f6c-9bdc-8ca9e9e3401c", 00:08:22.892 "is_configured": true, 00:08:22.892 "data_offset": 0, 00:08:22.892 "data_size": 65536 00:08:22.892 }, 00:08:22.892 { 00:08:22.892 "name": "BaseBdev2", 00:08:22.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.892 "is_configured": false, 00:08:22.892 "data_offset": 0, 00:08:22.892 "data_size": 0 00:08:22.892 } 00:08:22.892 ] 00:08:22.892 }' 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.892 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.152 [2024-12-12 05:46:30.640509] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:23.152 [2024-12-12 05:46:30.640562] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.152 [2024-12-12 05:46:30.652494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:23.152 [2024-12-12 05:46:30.654284] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:23.152 [2024-12-12 05:46:30.654327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.152 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.412 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.412 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.412 "name": "Existed_Raid", 00:08:23.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.412 "strip_size_kb": 0, 00:08:23.412 "state": "configuring", 00:08:23.412 "raid_level": "raid1", 00:08:23.412 "superblock": false, 00:08:23.412 "num_base_bdevs": 2, 00:08:23.412 "num_base_bdevs_discovered": 1, 00:08:23.412 "num_base_bdevs_operational": 2, 00:08:23.412 "base_bdevs_list": [ 00:08:23.412 { 00:08:23.412 "name": "BaseBdev1", 00:08:23.412 "uuid": "71b301ea-edeb-4f6c-9bdc-8ca9e9e3401c", 00:08:23.412 
"is_configured": true, 00:08:23.412 "data_offset": 0, 00:08:23.412 "data_size": 65536 00:08:23.412 }, 00:08:23.412 { 00:08:23.412 "name": "BaseBdev2", 00:08:23.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.412 "is_configured": false, 00:08:23.412 "data_offset": 0, 00:08:23.412 "data_size": 0 00:08:23.412 } 00:08:23.412 ] 00:08:23.412 }' 00:08:23.412 05:46:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.412 05:46:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.672 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:23.672 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.672 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.672 [2024-12-12 05:46:31.137383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:23.672 [2024-12-12 05:46:31.137518] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:23.672 [2024-12-12 05:46:31.137561] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:23.672 [2024-12-12 05:46:31.137873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:23.672 [2024-12-12 05:46:31.138099] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:23.672 [2024-12-12 05:46:31.138147] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:23.672 [2024-12-12 05:46:31.138488] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:23.672 BaseBdev2 00:08:23.672 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.672 05:46:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:23.672 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:23.672 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.672 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:23.672 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.672 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.672 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:23.672 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.672 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.672 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.672 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:23.672 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.672 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.672 [ 00:08:23.672 { 00:08:23.672 "name": "BaseBdev2", 00:08:23.672 "aliases": [ 00:08:23.672 "c00a3cc9-8407-4620-beca-3323ef0432ea" 00:08:23.672 ], 00:08:23.672 "product_name": "Malloc disk", 00:08:23.672 "block_size": 512, 00:08:23.672 "num_blocks": 65536, 00:08:23.672 "uuid": "c00a3cc9-8407-4620-beca-3323ef0432ea", 00:08:23.672 "assigned_rate_limits": { 00:08:23.672 "rw_ios_per_sec": 0, 00:08:23.672 "rw_mbytes_per_sec": 0, 00:08:23.672 "r_mbytes_per_sec": 0, 00:08:23.672 "w_mbytes_per_sec": 0 00:08:23.672 }, 00:08:23.672 "claimed": true, 00:08:23.672 "claim_type": 
"exclusive_write", 00:08:23.672 "zoned": false, 00:08:23.672 "supported_io_types": { 00:08:23.672 "read": true, 00:08:23.672 "write": true, 00:08:23.672 "unmap": true, 00:08:23.672 "flush": true, 00:08:23.672 "reset": true, 00:08:23.672 "nvme_admin": false, 00:08:23.672 "nvme_io": false, 00:08:23.673 "nvme_io_md": false, 00:08:23.673 "write_zeroes": true, 00:08:23.673 "zcopy": true, 00:08:23.673 "get_zone_info": false, 00:08:23.673 "zone_management": false, 00:08:23.673 "zone_append": false, 00:08:23.673 "compare": false, 00:08:23.673 "compare_and_write": false, 00:08:23.673 "abort": true, 00:08:23.673 "seek_hole": false, 00:08:23.673 "seek_data": false, 00:08:23.673 "copy": true, 00:08:23.673 "nvme_iov_md": false 00:08:23.673 }, 00:08:23.673 "memory_domains": [ 00:08:23.673 { 00:08:23.673 "dma_device_id": "system", 00:08:23.673 "dma_device_type": 1 00:08:23.673 }, 00:08:23.673 { 00:08:23.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.673 "dma_device_type": 2 00:08:23.673 } 00:08:23.673 ], 00:08:23.673 "driver_specific": {} 00:08:23.673 } 00:08:23.673 ] 00:08:23.673 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.673 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:23.673 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:23.673 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:23.673 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:23.673 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.673 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:23.673 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:23.673 
05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:23.673 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:23.673 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.673 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.673 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.673 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.673 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.673 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:23.673 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:23.673 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.933 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:23.933 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.933 "name": "Existed_Raid", 00:08:23.933 "uuid": "c5dadc5f-4f40-4ac1-a999-1c933a1286bd", 00:08:23.933 "strip_size_kb": 0, 00:08:23.933 "state": "online", 00:08:23.933 "raid_level": "raid1", 00:08:23.933 "superblock": false, 00:08:23.933 "num_base_bdevs": 2, 00:08:23.933 "num_base_bdevs_discovered": 2, 00:08:23.933 "num_base_bdevs_operational": 2, 00:08:23.933 "base_bdevs_list": [ 00:08:23.933 { 00:08:23.933 "name": "BaseBdev1", 00:08:23.933 "uuid": "71b301ea-edeb-4f6c-9bdc-8ca9e9e3401c", 00:08:23.933 "is_configured": true, 00:08:23.933 "data_offset": 0, 00:08:23.933 "data_size": 65536 00:08:23.933 }, 00:08:23.933 { 00:08:23.933 "name": "BaseBdev2", 
00:08:23.933 "uuid": "c00a3cc9-8407-4620-beca-3323ef0432ea", 00:08:23.933 "is_configured": true, 00:08:23.933 "data_offset": 0, 00:08:23.933 "data_size": 65536 00:08:23.933 } 00:08:23.933 ] 00:08:23.933 }' 00:08:23.933 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.933 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.193 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:24.193 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:24.193 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:24.193 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:24.193 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:24.193 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:24.193 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:24.193 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:24.193 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.193 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.193 [2024-12-12 05:46:31.608854] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:24.193 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.193 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:24.193 "name": "Existed_Raid", 00:08:24.193 "aliases": [ 00:08:24.193 "c5dadc5f-4f40-4ac1-a999-1c933a1286bd" 00:08:24.193 ], 
00:08:24.193 "product_name": "Raid Volume", 00:08:24.193 "block_size": 512, 00:08:24.193 "num_blocks": 65536, 00:08:24.193 "uuid": "c5dadc5f-4f40-4ac1-a999-1c933a1286bd", 00:08:24.193 "assigned_rate_limits": { 00:08:24.193 "rw_ios_per_sec": 0, 00:08:24.193 "rw_mbytes_per_sec": 0, 00:08:24.193 "r_mbytes_per_sec": 0, 00:08:24.193 "w_mbytes_per_sec": 0 00:08:24.193 }, 00:08:24.193 "claimed": false, 00:08:24.193 "zoned": false, 00:08:24.193 "supported_io_types": { 00:08:24.193 "read": true, 00:08:24.193 "write": true, 00:08:24.193 "unmap": false, 00:08:24.193 "flush": false, 00:08:24.193 "reset": true, 00:08:24.193 "nvme_admin": false, 00:08:24.193 "nvme_io": false, 00:08:24.193 "nvme_io_md": false, 00:08:24.193 "write_zeroes": true, 00:08:24.193 "zcopy": false, 00:08:24.193 "get_zone_info": false, 00:08:24.193 "zone_management": false, 00:08:24.193 "zone_append": false, 00:08:24.193 "compare": false, 00:08:24.193 "compare_and_write": false, 00:08:24.193 "abort": false, 00:08:24.193 "seek_hole": false, 00:08:24.193 "seek_data": false, 00:08:24.193 "copy": false, 00:08:24.193 "nvme_iov_md": false 00:08:24.193 }, 00:08:24.193 "memory_domains": [ 00:08:24.193 { 00:08:24.193 "dma_device_id": "system", 00:08:24.193 "dma_device_type": 1 00:08:24.193 }, 00:08:24.193 { 00:08:24.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.193 "dma_device_type": 2 00:08:24.193 }, 00:08:24.193 { 00:08:24.193 "dma_device_id": "system", 00:08:24.193 "dma_device_type": 1 00:08:24.193 }, 00:08:24.193 { 00:08:24.193 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.193 "dma_device_type": 2 00:08:24.193 } 00:08:24.193 ], 00:08:24.193 "driver_specific": { 00:08:24.193 "raid": { 00:08:24.193 "uuid": "c5dadc5f-4f40-4ac1-a999-1c933a1286bd", 00:08:24.193 "strip_size_kb": 0, 00:08:24.193 "state": "online", 00:08:24.193 "raid_level": "raid1", 00:08:24.193 "superblock": false, 00:08:24.193 "num_base_bdevs": 2, 00:08:24.193 "num_base_bdevs_discovered": 2, 00:08:24.193 "num_base_bdevs_operational": 
2, 00:08:24.193 "base_bdevs_list": [ 00:08:24.193 { 00:08:24.193 "name": "BaseBdev1", 00:08:24.193 "uuid": "71b301ea-edeb-4f6c-9bdc-8ca9e9e3401c", 00:08:24.193 "is_configured": true, 00:08:24.193 "data_offset": 0, 00:08:24.193 "data_size": 65536 00:08:24.193 }, 00:08:24.193 { 00:08:24.193 "name": "BaseBdev2", 00:08:24.193 "uuid": "c00a3cc9-8407-4620-beca-3323ef0432ea", 00:08:24.193 "is_configured": true, 00:08:24.193 "data_offset": 0, 00:08:24.193 "data_size": 65536 00:08:24.193 } 00:08:24.193 ] 00:08:24.193 } 00:08:24.193 } 00:08:24.193 }' 00:08:24.193 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:24.193 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:24.193 BaseBdev2' 00:08:24.193 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.454 05:46:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.454 [2024-12-12 05:46:31.832283] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.454 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.714 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:24.714 "name": "Existed_Raid", 00:08:24.714 "uuid": 
"c5dadc5f-4f40-4ac1-a999-1c933a1286bd", 00:08:24.714 "strip_size_kb": 0, 00:08:24.714 "state": "online", 00:08:24.714 "raid_level": "raid1", 00:08:24.714 "superblock": false, 00:08:24.714 "num_base_bdevs": 2, 00:08:24.714 "num_base_bdevs_discovered": 1, 00:08:24.714 "num_base_bdevs_operational": 1, 00:08:24.714 "base_bdevs_list": [ 00:08:24.714 { 00:08:24.714 "name": null, 00:08:24.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:24.714 "is_configured": false, 00:08:24.714 "data_offset": 0, 00:08:24.714 "data_size": 65536 00:08:24.714 }, 00:08:24.714 { 00:08:24.714 "name": "BaseBdev2", 00:08:24.714 "uuid": "c00a3cc9-8407-4620-beca-3323ef0432ea", 00:08:24.714 "is_configured": true, 00:08:24.714 "data_offset": 0, 00:08:24.714 "data_size": 65536 00:08:24.714 } 00:08:24.714 ] 00:08:24.714 }' 00:08:24.714 05:46:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:24.714 05:46:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.974 05:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:24.975 05:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:24.975 05:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:24.975 05:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.975 05:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.975 05:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.975 05:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.975 05:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:24.975 05:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid 
']' 00:08:24.975 05:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:24.975 05:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:24.975 05:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:24.975 [2024-12-12 05:46:32.398694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:24.975 [2024-12-12 05:46:32.398804] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:24.975 [2024-12-12 05:46:32.490807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.975 [2024-12-12 05:46:32.490933] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.975 [2024-12-12 05:46:32.490951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:24.975 05:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:24.975 05:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:24.975 05:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:25.234 05:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:25.235 05:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:25.235 05:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.235 05:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.235 05:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.235 05:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:25.235 
05:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:25.235 05:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:25.235 05:46:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63683 00:08:25.235 05:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63683 ']' 00:08:25.235 05:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63683 00:08:25.235 05:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:25.235 05:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.235 05:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63683 00:08:25.235 killing process with pid 63683 00:08:25.235 05:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.235 05:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.235 05:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63683' 00:08:25.235 05:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63683 00:08:25.235 [2024-12-12 05:46:32.584620] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:25.235 05:46:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63683 00:08:25.235 [2024-12-12 05:46:32.601350] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:26.174 05:46:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:26.174 00:08:26.174 real 0m4.902s 00:08:26.174 user 0m7.023s 00:08:26.174 sys 0m0.804s 00:08:26.174 05:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:08:26.174 05:46:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.174 ************************************ 00:08:26.174 END TEST raid_state_function_test 00:08:26.174 ************************************ 00:08:26.434 05:46:33 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:26.434 05:46:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:26.434 05:46:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.434 05:46:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:26.434 ************************************ 00:08:26.434 START TEST raid_state_function_test_sb 00:08:26.434 ************************************ 00:08:26.434 05:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:08:26.434 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:26.434 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:26.434 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63931 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63931' 00:08:26.435 Process raid pid: 63931 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63931 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@835 -- # '[' -z 63931 ']' 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.435 05:46:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.435 [2024-12-12 05:46:33.862135] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:08:26.435 [2024-12-12 05:46:33.862260] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.695 [2024-12-12 05:46:34.035175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.695 [2024-12-12 05:46:34.152026] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.955 [2024-12-12 05:46:34.357297] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.955 [2024-12-12 05:46:34.357337] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.215 05:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.215 05:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:27.215 05:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create 
-s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:27.215 05:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.215 05:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.215 [2024-12-12 05:46:34.690360] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:27.215 [2024-12-12 05:46:34.690428] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:27.215 [2024-12-12 05:46:34.690439] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:27.215 [2024-12-12 05:46:34.690449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:27.215 05:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.215 05:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:27.215 05:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.215 05:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.215 05:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:27.215 05:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:27.215 05:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.215 05:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.215 05:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.215 05:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:27.215 05:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.215 05:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.215 05:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.215 05:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.215 05:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.215 05:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.476 05:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.476 "name": "Existed_Raid", 00:08:27.476 "uuid": "93da0949-56db-42e1-90e4-f56f643a608b", 00:08:27.476 "strip_size_kb": 0, 00:08:27.476 "state": "configuring", 00:08:27.476 "raid_level": "raid1", 00:08:27.476 "superblock": true, 00:08:27.476 "num_base_bdevs": 2, 00:08:27.476 "num_base_bdevs_discovered": 0, 00:08:27.476 "num_base_bdevs_operational": 2, 00:08:27.476 "base_bdevs_list": [ 00:08:27.476 { 00:08:27.476 "name": "BaseBdev1", 00:08:27.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.476 "is_configured": false, 00:08:27.476 "data_offset": 0, 00:08:27.476 "data_size": 0 00:08:27.476 }, 00:08:27.476 { 00:08:27.476 "name": "BaseBdev2", 00:08:27.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:27.476 "is_configured": false, 00:08:27.476 "data_offset": 0, 00:08:27.476 "data_size": 0 00:08:27.476 } 00:08:27.476 ] 00:08:27.476 }' 00:08:27.476 05:46:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.476 05:46:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.737 [2024-12-12 05:46:35.133536] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:27.737 [2024-12-12 05:46:35.133614] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.737 [2024-12-12 05:46:35.141517] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:27.737 [2024-12-12 05:46:35.141590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:27.737 [2024-12-12 05:46:35.141617] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:27.737 [2024-12-12 05:46:35.141657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:08:27.737 [2024-12-12 05:46:35.183689] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:27.737 BaseBdev1 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.737 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.737 [ 00:08:27.737 { 00:08:27.737 "name": "BaseBdev1", 00:08:27.737 "aliases": [ 00:08:27.737 "d01c99a4-5f22-4298-8dfc-6dc13e792473" 00:08:27.737 ], 00:08:27.737 "product_name": "Malloc disk", 00:08:27.737 "block_size": 512, 
00:08:27.737 "num_blocks": 65536, 00:08:27.737 "uuid": "d01c99a4-5f22-4298-8dfc-6dc13e792473", 00:08:27.737 "assigned_rate_limits": { 00:08:27.737 "rw_ios_per_sec": 0, 00:08:27.737 "rw_mbytes_per_sec": 0, 00:08:27.738 "r_mbytes_per_sec": 0, 00:08:27.738 "w_mbytes_per_sec": 0 00:08:27.738 }, 00:08:27.738 "claimed": true, 00:08:27.738 "claim_type": "exclusive_write", 00:08:27.738 "zoned": false, 00:08:27.738 "supported_io_types": { 00:08:27.738 "read": true, 00:08:27.738 "write": true, 00:08:27.738 "unmap": true, 00:08:27.738 "flush": true, 00:08:27.738 "reset": true, 00:08:27.738 "nvme_admin": false, 00:08:27.738 "nvme_io": false, 00:08:27.738 "nvme_io_md": false, 00:08:27.738 "write_zeroes": true, 00:08:27.738 "zcopy": true, 00:08:27.738 "get_zone_info": false, 00:08:27.738 "zone_management": false, 00:08:27.738 "zone_append": false, 00:08:27.738 "compare": false, 00:08:27.738 "compare_and_write": false, 00:08:27.738 "abort": true, 00:08:27.738 "seek_hole": false, 00:08:27.738 "seek_data": false, 00:08:27.738 "copy": true, 00:08:27.738 "nvme_iov_md": false 00:08:27.738 }, 00:08:27.738 "memory_domains": [ 00:08:27.738 { 00:08:27.738 "dma_device_id": "system", 00:08:27.738 "dma_device_type": 1 00:08:27.738 }, 00:08:27.738 { 00:08:27.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.738 "dma_device_type": 2 00:08:27.738 } 00:08:27.738 ], 00:08:27.738 "driver_specific": {} 00:08:27.738 } 00:08:27.738 ] 00:08:27.738 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.738 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:27.738 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:27.738 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:27.738 05:46:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.738 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:27.738 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:27.738 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.738 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.738 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.738 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.738 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.738 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.738 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.738 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.738 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.738 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.028 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.028 "name": "Existed_Raid", 00:08:28.028 "uuid": "b717f924-840b-4da5-a7cb-0abbe40d2333", 00:08:28.028 "strip_size_kb": 0, 00:08:28.028 "state": "configuring", 00:08:28.028 "raid_level": "raid1", 00:08:28.028 "superblock": true, 00:08:28.028 "num_base_bdevs": 2, 00:08:28.028 "num_base_bdevs_discovered": 1, 00:08:28.028 "num_base_bdevs_operational": 2, 00:08:28.028 "base_bdevs_list": [ 00:08:28.028 { 00:08:28.028 "name": "BaseBdev1", 
00:08:28.028 "uuid": "d01c99a4-5f22-4298-8dfc-6dc13e792473", 00:08:28.028 "is_configured": true, 00:08:28.028 "data_offset": 2048, 00:08:28.028 "data_size": 63488 00:08:28.028 }, 00:08:28.028 { 00:08:28.028 "name": "BaseBdev2", 00:08:28.028 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.028 "is_configured": false, 00:08:28.028 "data_offset": 0, 00:08:28.028 "data_size": 0 00:08:28.028 } 00:08:28.028 ] 00:08:28.028 }' 00:08:28.028 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.028 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.292 [2024-12-12 05:46:35.619044] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:28.292 [2024-12-12 05:46:35.619168] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.292 [2024-12-12 05:46:35.631095] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:28.292 [2024-12-12 05:46:35.632964] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev2 00:08:28.292 [2024-12-12 05:46:35.633045] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.292 "name": "Existed_Raid", 00:08:28.292 "uuid": "fc7573b9-8929-4a49-a609-95a403eaa297", 00:08:28.292 "strip_size_kb": 0, 00:08:28.292 "state": "configuring", 00:08:28.292 "raid_level": "raid1", 00:08:28.292 "superblock": true, 00:08:28.292 "num_base_bdevs": 2, 00:08:28.292 "num_base_bdevs_discovered": 1, 00:08:28.292 "num_base_bdevs_operational": 2, 00:08:28.292 "base_bdevs_list": [ 00:08:28.292 { 00:08:28.292 "name": "BaseBdev1", 00:08:28.292 "uuid": "d01c99a4-5f22-4298-8dfc-6dc13e792473", 00:08:28.292 "is_configured": true, 00:08:28.292 "data_offset": 2048, 00:08:28.292 "data_size": 63488 00:08:28.292 }, 00:08:28.292 { 00:08:28.292 "name": "BaseBdev2", 00:08:28.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:28.292 "is_configured": false, 00:08:28.292 "data_offset": 0, 00:08:28.292 "data_size": 0 00:08:28.292 } 00:08:28.292 ] 00:08:28.292 }' 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.292 05:46:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.864 [2024-12-12 05:46:36.118695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:28.864 [2024-12-12 05:46:36.119054] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:28.864 [2024-12-12 05:46:36.119075] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:28.864 [2024-12-12 05:46:36.119365] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:28.864 [2024-12-12 05:46:36.119578] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:28.864 [2024-12-12 05:46:36.119596] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:28.864 BaseBdev2 00:08:28.864 [2024-12-12 05:46:36.119769] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.864 [ 00:08:28.864 { 00:08:28.864 "name": "BaseBdev2", 00:08:28.864 "aliases": [ 00:08:28.864 "ac2efb62-2118-459d-8ba4-01b5e7ee2a94" 00:08:28.864 ], 00:08:28.864 "product_name": "Malloc disk", 00:08:28.864 "block_size": 512, 00:08:28.864 "num_blocks": 65536, 00:08:28.864 "uuid": "ac2efb62-2118-459d-8ba4-01b5e7ee2a94", 00:08:28.864 "assigned_rate_limits": { 00:08:28.864 "rw_ios_per_sec": 0, 00:08:28.864 "rw_mbytes_per_sec": 0, 00:08:28.864 "r_mbytes_per_sec": 0, 00:08:28.864 "w_mbytes_per_sec": 0 00:08:28.864 }, 00:08:28.864 "claimed": true, 00:08:28.864 "claim_type": "exclusive_write", 00:08:28.864 "zoned": false, 00:08:28.864 "supported_io_types": { 00:08:28.864 "read": true, 00:08:28.864 "write": true, 00:08:28.864 "unmap": true, 00:08:28.864 "flush": true, 00:08:28.864 "reset": true, 00:08:28.864 "nvme_admin": false, 00:08:28.864 "nvme_io": false, 00:08:28.864 "nvme_io_md": false, 00:08:28.864 "write_zeroes": true, 00:08:28.864 "zcopy": true, 00:08:28.864 "get_zone_info": false, 00:08:28.864 "zone_management": false, 00:08:28.864 "zone_append": false, 00:08:28.864 "compare": false, 00:08:28.864 "compare_and_write": false, 00:08:28.864 "abort": true, 00:08:28.864 "seek_hole": false, 00:08:28.864 "seek_data": false, 00:08:28.864 "copy": true, 00:08:28.864 "nvme_iov_md": false 00:08:28.864 }, 00:08:28.864 "memory_domains": [ 00:08:28.864 { 00:08:28.864 "dma_device_id": "system", 00:08:28.864 "dma_device_type": 1 00:08:28.864 }, 00:08:28.864 { 00:08:28.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.864 "dma_device_type": 2 00:08:28.864 } 00:08:28.864 ], 00:08:28.864 "driver_specific": 
{} 00:08:28.864 } 00:08:28.864 ] 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.864 "name": "Existed_Raid", 00:08:28.864 "uuid": "fc7573b9-8929-4a49-a609-95a403eaa297", 00:08:28.864 "strip_size_kb": 0, 00:08:28.864 "state": "online", 00:08:28.864 "raid_level": "raid1", 00:08:28.864 "superblock": true, 00:08:28.864 "num_base_bdevs": 2, 00:08:28.864 "num_base_bdevs_discovered": 2, 00:08:28.864 "num_base_bdevs_operational": 2, 00:08:28.864 "base_bdevs_list": [ 00:08:28.864 { 00:08:28.864 "name": "BaseBdev1", 00:08:28.864 "uuid": "d01c99a4-5f22-4298-8dfc-6dc13e792473", 00:08:28.864 "is_configured": true, 00:08:28.864 "data_offset": 2048, 00:08:28.864 "data_size": 63488 00:08:28.864 }, 00:08:28.864 { 00:08:28.864 "name": "BaseBdev2", 00:08:28.864 "uuid": "ac2efb62-2118-459d-8ba4-01b5e7ee2a94", 00:08:28.864 "is_configured": true, 00:08:28.864 "data_offset": 2048, 00:08:28.864 "data_size": 63488 00:08:28.864 } 00:08:28.864 ] 00:08:28.864 }' 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.864 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.124 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:29.124 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:29.124 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:29.124 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:29.124 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local 
name 00:08:29.124 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:29.124 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:29.124 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:29.124 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.124 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.124 [2024-12-12 05:46:36.606177] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:29.124 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.124 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:29.124 "name": "Existed_Raid", 00:08:29.124 "aliases": [ 00:08:29.124 "fc7573b9-8929-4a49-a609-95a403eaa297" 00:08:29.124 ], 00:08:29.124 "product_name": "Raid Volume", 00:08:29.124 "block_size": 512, 00:08:29.124 "num_blocks": 63488, 00:08:29.124 "uuid": "fc7573b9-8929-4a49-a609-95a403eaa297", 00:08:29.124 "assigned_rate_limits": { 00:08:29.124 "rw_ios_per_sec": 0, 00:08:29.124 "rw_mbytes_per_sec": 0, 00:08:29.124 "r_mbytes_per_sec": 0, 00:08:29.124 "w_mbytes_per_sec": 0 00:08:29.124 }, 00:08:29.124 "claimed": false, 00:08:29.124 "zoned": false, 00:08:29.124 "supported_io_types": { 00:08:29.124 "read": true, 00:08:29.124 "write": true, 00:08:29.124 "unmap": false, 00:08:29.124 "flush": false, 00:08:29.124 "reset": true, 00:08:29.124 "nvme_admin": false, 00:08:29.124 "nvme_io": false, 00:08:29.124 "nvme_io_md": false, 00:08:29.124 "write_zeroes": true, 00:08:29.124 "zcopy": false, 00:08:29.124 "get_zone_info": false, 00:08:29.124 "zone_management": false, 00:08:29.124 "zone_append": false, 00:08:29.124 "compare": false, 00:08:29.124 "compare_and_write": false, 
00:08:29.124 "abort": false, 00:08:29.124 "seek_hole": false, 00:08:29.124 "seek_data": false, 00:08:29.124 "copy": false, 00:08:29.124 "nvme_iov_md": false 00:08:29.124 }, 00:08:29.124 "memory_domains": [ 00:08:29.124 { 00:08:29.124 "dma_device_id": "system", 00:08:29.124 "dma_device_type": 1 00:08:29.124 }, 00:08:29.124 { 00:08:29.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.124 "dma_device_type": 2 00:08:29.124 }, 00:08:29.124 { 00:08:29.124 "dma_device_id": "system", 00:08:29.124 "dma_device_type": 1 00:08:29.124 }, 00:08:29.124 { 00:08:29.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:29.124 "dma_device_type": 2 00:08:29.124 } 00:08:29.124 ], 00:08:29.124 "driver_specific": { 00:08:29.124 "raid": { 00:08:29.124 "uuid": "fc7573b9-8929-4a49-a609-95a403eaa297", 00:08:29.124 "strip_size_kb": 0, 00:08:29.124 "state": "online", 00:08:29.124 "raid_level": "raid1", 00:08:29.124 "superblock": true, 00:08:29.124 "num_base_bdevs": 2, 00:08:29.124 "num_base_bdevs_discovered": 2, 00:08:29.124 "num_base_bdevs_operational": 2, 00:08:29.124 "base_bdevs_list": [ 00:08:29.124 { 00:08:29.124 "name": "BaseBdev1", 00:08:29.124 "uuid": "d01c99a4-5f22-4298-8dfc-6dc13e792473", 00:08:29.124 "is_configured": true, 00:08:29.124 "data_offset": 2048, 00:08:29.124 "data_size": 63488 00:08:29.124 }, 00:08:29.124 { 00:08:29.124 "name": "BaseBdev2", 00:08:29.124 "uuid": "ac2efb62-2118-459d-8ba4-01b5e7ee2a94", 00:08:29.124 "is_configured": true, 00:08:29.124 "data_offset": 2048, 00:08:29.124 "data_size": 63488 00:08:29.124 } 00:08:29.124 ] 00:08:29.124 } 00:08:29.124 } 00:08:29.124 }' 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:29.384 BaseBdev2' 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 
-- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.384 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.384 [2024-12-12 05:46:36.813588] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:29.644 05:46:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:29.644 "name": "Existed_Raid", 00:08:29.644 "uuid": "fc7573b9-8929-4a49-a609-95a403eaa297", 00:08:29.644 "strip_size_kb": 0, 00:08:29.644 "state": "online", 00:08:29.644 "raid_level": "raid1", 00:08:29.644 "superblock": true, 00:08:29.644 "num_base_bdevs": 2, 00:08:29.644 "num_base_bdevs_discovered": 1, 00:08:29.644 "num_base_bdevs_operational": 1, 00:08:29.644 "base_bdevs_list": [ 00:08:29.644 { 00:08:29.644 "name": null, 00:08:29.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.644 "is_configured": false, 00:08:29.644 "data_offset": 0, 00:08:29.644 "data_size": 63488 00:08:29.644 }, 00:08:29.644 { 00:08:29.644 "name": "BaseBdev2", 00:08:29.644 "uuid": "ac2efb62-2118-459d-8ba4-01b5e7ee2a94", 00:08:29.644 "is_configured": true, 00:08:29.644 "data_offset": 2048, 00:08:29.644 "data_size": 63488 00:08:29.644 } 00:08:29.644 ] 00:08:29.644 }' 00:08:29.644 
05:46:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:29.644 05:46:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.904 05:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:29.904 05:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:29.904 05:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:29.904 05:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.904 05:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.904 05:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:29.904 05:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.904 05:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:29.904 05:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:29.904 05:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:29.904 05:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.904 05:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.904 [2024-12-12 05:46:37.397135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:29.904 [2024-12-12 05:46:37.397238] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.163 [2024-12-12 05:46:37.488917] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.163 [2024-12-12 05:46:37.488981] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.163 [2024-12-12 05:46:37.488994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:30.163 05:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.163 05:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:30.163 05:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:30.163 05:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:30.163 05:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:30.163 05:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:30.163 05:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.163 05:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:30.163 05:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:30.163 05:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:30.163 05:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:30.163 05:46:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63931 00:08:30.163 05:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 63931 ']' 00:08:30.163 05:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 63931 00:08:30.164 05:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:30.164 05:46:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:30.164 05:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63931 00:08:30.164 killing process with pid 63931 00:08:30.164 05:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:30.164 05:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:30.164 05:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63931' 00:08:30.164 05:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 63931 00:08:30.164 [2024-12-12 05:46:37.573527] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.164 05:46:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 63931 00:08:30.164 [2024-12-12 05:46:37.589177] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:31.543 05:46:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:31.543 00:08:31.543 real 0m4.909s 00:08:31.543 user 0m7.083s 00:08:31.543 sys 0m0.784s 00:08:31.543 05:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.543 ************************************ 00:08:31.543 END TEST raid_state_function_test_sb 00:08:31.543 ************************************ 00:08:31.543 05:46:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:31.543 05:46:38 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:31.543 05:46:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:31.543 05:46:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.543 05:46:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:31.543 
************************************ 00:08:31.543 START TEST raid_superblock_test 00:08:31.543 ************************************ 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64183 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64183 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64183 ']' 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.543 05:46:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.543 [2024-12-12 05:46:38.831665] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:08:31.543 [2024-12-12 05:46:38.831885] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64183 ] 00:08:31.543 [2024-12-12 05:46:39.003239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.803 [2024-12-12 05:46:39.115701] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.803 [2024-12-12 05:46:39.316219] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.803 [2024-12-12 05:46:39.316333] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:32.372 05:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.372 05:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:32.372 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:32.372 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:32.372 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:32.372 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:32.372 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:32.372 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:32.372 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:32.372 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:32.372 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:32.372 
05:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.372 05:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.372 malloc1 00:08:32.372 05:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.372 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:32.372 05:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.373 [2024-12-12 05:46:39.710214] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:32.373 [2024-12-12 05:46:39.710321] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.373 [2024-12-12 05:46:39.710377] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:32.373 [2024-12-12 05:46:39.710406] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.373 [2024-12-12 05:46:39.712562] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.373 [2024-12-12 05:46:39.712631] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:32.373 pt1 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.373 malloc2 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.373 [2024-12-12 05:46:39.764671] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:32.373 [2024-12-12 05:46:39.764769] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.373 [2024-12-12 05:46:39.764808] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:32.373 [2024-12-12 05:46:39.764835] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.373 [2024-12-12 05:46:39.766883] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.373 [2024-12-12 05:46:39.766969] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:32.373 
pt2 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.373 [2024-12-12 05:46:39.776693] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:32.373 [2024-12-12 05:46:39.778423] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:32.373 [2024-12-12 05:46:39.778592] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:32.373 [2024-12-12 05:46:39.778610] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:32.373 [2024-12-12 05:46:39.778840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:32.373 [2024-12-12 05:46:39.778985] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:32.373 [2024-12-12 05:46:39.779000] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:32.373 [2024-12-12 05:46:39.779137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:32.373 "name": "raid_bdev1", 00:08:32.373 "uuid": "29dfc65b-bba1-4fdd-905a-626fda87b27b", 00:08:32.373 "strip_size_kb": 0, 00:08:32.373 "state": "online", 00:08:32.373 "raid_level": "raid1", 00:08:32.373 "superblock": true, 00:08:32.373 "num_base_bdevs": 2, 00:08:32.373 "num_base_bdevs_discovered": 2, 00:08:32.373 "num_base_bdevs_operational": 2, 00:08:32.373 "base_bdevs_list": [ 00:08:32.373 { 00:08:32.373 "name": "pt1", 00:08:32.373 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:32.373 "is_configured": true, 00:08:32.373 "data_offset": 2048, 00:08:32.373 "data_size": 63488 00:08:32.373 }, 00:08:32.373 { 00:08:32.373 "name": "pt2", 00:08:32.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:32.373 "is_configured": true, 00:08:32.373 "data_offset": 2048, 00:08:32.373 "data_size": 63488 00:08:32.373 } 00:08:32.373 ] 00:08:32.373 }' 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:32.373 05:46:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:32.942 [2024-12-12 05:46:40.236193] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:08:32.942 "name": "raid_bdev1", 00:08:32.942 "aliases": [ 00:08:32.942 "29dfc65b-bba1-4fdd-905a-626fda87b27b" 00:08:32.942 ], 00:08:32.942 "product_name": "Raid Volume", 00:08:32.942 "block_size": 512, 00:08:32.942 "num_blocks": 63488, 00:08:32.942 "uuid": "29dfc65b-bba1-4fdd-905a-626fda87b27b", 00:08:32.942 "assigned_rate_limits": { 00:08:32.942 "rw_ios_per_sec": 0, 00:08:32.942 "rw_mbytes_per_sec": 0, 00:08:32.942 "r_mbytes_per_sec": 0, 00:08:32.942 "w_mbytes_per_sec": 0 00:08:32.942 }, 00:08:32.942 "claimed": false, 00:08:32.942 "zoned": false, 00:08:32.942 "supported_io_types": { 00:08:32.942 "read": true, 00:08:32.942 "write": true, 00:08:32.942 "unmap": false, 00:08:32.942 "flush": false, 00:08:32.942 "reset": true, 00:08:32.942 "nvme_admin": false, 00:08:32.942 "nvme_io": false, 00:08:32.942 "nvme_io_md": false, 00:08:32.942 "write_zeroes": true, 00:08:32.942 "zcopy": false, 00:08:32.942 "get_zone_info": false, 00:08:32.942 "zone_management": false, 00:08:32.942 "zone_append": false, 00:08:32.942 "compare": false, 00:08:32.942 "compare_and_write": false, 00:08:32.942 "abort": false, 00:08:32.942 "seek_hole": false, 00:08:32.942 "seek_data": false, 00:08:32.942 "copy": false, 00:08:32.942 "nvme_iov_md": false 00:08:32.942 }, 00:08:32.942 "memory_domains": [ 00:08:32.942 { 00:08:32.942 "dma_device_id": "system", 00:08:32.942 "dma_device_type": 1 00:08:32.942 }, 00:08:32.942 { 00:08:32.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.942 "dma_device_type": 2 00:08:32.942 }, 00:08:32.942 { 00:08:32.942 "dma_device_id": "system", 00:08:32.942 "dma_device_type": 1 00:08:32.942 }, 00:08:32.942 { 00:08:32.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:32.942 "dma_device_type": 2 00:08:32.942 } 00:08:32.942 ], 00:08:32.942 "driver_specific": { 00:08:32.942 "raid": { 00:08:32.942 "uuid": "29dfc65b-bba1-4fdd-905a-626fda87b27b", 00:08:32.942 "strip_size_kb": 0, 00:08:32.942 "state": "online", 00:08:32.942 "raid_level": "raid1", 
00:08:32.942 "superblock": true, 00:08:32.942 "num_base_bdevs": 2, 00:08:32.942 "num_base_bdevs_discovered": 2, 00:08:32.942 "num_base_bdevs_operational": 2, 00:08:32.942 "base_bdevs_list": [ 00:08:32.942 { 00:08:32.942 "name": "pt1", 00:08:32.942 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:32.942 "is_configured": true, 00:08:32.942 "data_offset": 2048, 00:08:32.942 "data_size": 63488 00:08:32.942 }, 00:08:32.942 { 00:08:32.942 "name": "pt2", 00:08:32.942 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:32.942 "is_configured": true, 00:08:32.942 "data_offset": 2048, 00:08:32.942 "data_size": 63488 00:08:32.942 } 00:08:32.942 ] 00:08:32.942 } 00:08:32.942 } 00:08:32.942 }' 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:32.942 pt2' 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:32.942 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:32.943 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:32.943 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:32.943 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.943 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:32.943 [2024-12-12 05:46:40.431858] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:32.943 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=29dfc65b-bba1-4fdd-905a-626fda87b27b 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 29dfc65b-bba1-4fdd-905a-626fda87b27b ']' 00:08:33.202 05:46:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.202 [2024-12-12 05:46:40.479470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.202 [2024-12-12 05:46:40.479497] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:33.202 [2024-12-12 05:46:40.479593] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.202 [2024-12-12 05:46:40.479654] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.202 [2024-12-12 05:46:40.479668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:33.202 05:46:40 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.202 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.203 [2024-12-12 05:46:40.607259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:33.203 [2024-12-12 05:46:40.609164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:33.203 [2024-12-12 05:46:40.609269] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:33.203 [2024-12-12 05:46:40.609368] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:33.203 [2024-12-12 05:46:40.609408] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.203 [2024-12-12 05:46:40.609431] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:33.203 request: 00:08:33.203 { 00:08:33.203 "name": "raid_bdev1", 00:08:33.203 "raid_level": "raid1", 00:08:33.203 "base_bdevs": [ 00:08:33.203 "malloc1", 00:08:33.203 "malloc2" 00:08:33.203 ], 00:08:33.203 "superblock": false, 00:08:33.203 "method": "bdev_raid_create", 00:08:33.203 "req_id": 1 00:08:33.203 } 00:08:33.203 Got 
JSON-RPC error response 00:08:33.203 response: 00:08:33.203 { 00:08:33.203 "code": -17, 00:08:33.203 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:33.203 } 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.203 [2024-12-12 05:46:40.663200] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:33.203 [2024-12-12 05:46:40.663347] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:08:33.203 [2024-12-12 05:46:40.663384] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:33.203 [2024-12-12 05:46:40.663416] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.203 [2024-12-12 05:46:40.665686] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.203 [2024-12-12 05:46:40.665762] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:33.203 [2024-12-12 05:46:40.665894] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:33.203 [2024-12-12 05:46:40.666010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:33.203 pt1 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.203 
05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.203 "name": "raid_bdev1", 00:08:33.203 "uuid": "29dfc65b-bba1-4fdd-905a-626fda87b27b", 00:08:33.203 "strip_size_kb": 0, 00:08:33.203 "state": "configuring", 00:08:33.203 "raid_level": "raid1", 00:08:33.203 "superblock": true, 00:08:33.203 "num_base_bdevs": 2, 00:08:33.203 "num_base_bdevs_discovered": 1, 00:08:33.203 "num_base_bdevs_operational": 2, 00:08:33.203 "base_bdevs_list": [ 00:08:33.203 { 00:08:33.203 "name": "pt1", 00:08:33.203 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.203 "is_configured": true, 00:08:33.203 "data_offset": 2048, 00:08:33.203 "data_size": 63488 00:08:33.203 }, 00:08:33.203 { 00:08:33.203 "name": null, 00:08:33.203 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.203 "is_configured": false, 00:08:33.203 "data_offset": 2048, 00:08:33.203 "data_size": 63488 00:08:33.203 } 00:08:33.203 ] 00:08:33.203 }' 00:08:33.203 05:46:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.463 05:46:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:33.721 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:33.721 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:08:33.721 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:33.721 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.721 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.721 [2024-12-12 05:46:41.098456] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:33.721 [2024-12-12 05:46:41.098542] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:33.721 [2024-12-12 05:46:41.098578] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:33.721 [2024-12-12 05:46:41.098589] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:33.721 [2024-12-12 05:46:41.099027] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:33.721 [2024-12-12 05:46:41.099048] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:33.721 [2024-12-12 05:46:41.099127] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:33.721 [2024-12-12 05:46:41.099153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:33.721 [2024-12-12 05:46:41.099280] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:33.721 [2024-12-12 05:46:41.099290] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:33.721 [2024-12-12 05:46:41.099537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:33.721 [2024-12-12 05:46:41.099696] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:33.721 [2024-12-12 05:46:41.099704] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:08:33.721 [2024-12-12 05:46:41.099870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.721 pt2 00:08:33.721 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.721 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:33.721 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:33.722 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:33.722 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.722 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.722 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:33.722 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:33.722 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:33.722 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.722 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.722 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.722 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.722 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.722 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:33.722 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.722 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:08:33.722 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:33.722 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.722 "name": "raid_bdev1", 00:08:33.722 "uuid": "29dfc65b-bba1-4fdd-905a-626fda87b27b", 00:08:33.722 "strip_size_kb": 0, 00:08:33.722 "state": "online", 00:08:33.722 "raid_level": "raid1", 00:08:33.722 "superblock": true, 00:08:33.722 "num_base_bdevs": 2, 00:08:33.722 "num_base_bdevs_discovered": 2, 00:08:33.722 "num_base_bdevs_operational": 2, 00:08:33.722 "base_bdevs_list": [ 00:08:33.722 { 00:08:33.722 "name": "pt1", 00:08:33.722 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.722 "is_configured": true, 00:08:33.722 "data_offset": 2048, 00:08:33.722 "data_size": 63488 00:08:33.722 }, 00:08:33.722 { 00:08:33.722 "name": "pt2", 00:08:33.722 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.722 "is_configured": true, 00:08:33.722 "data_offset": 2048, 00:08:33.722 "data_size": 63488 00:08:33.722 } 00:08:33.722 ] 00:08:33.722 }' 00:08:33.722 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.722 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.291 [2024-12-12 05:46:41.565911] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:34.291 "name": "raid_bdev1", 00:08:34.291 "aliases": [ 00:08:34.291 "29dfc65b-bba1-4fdd-905a-626fda87b27b" 00:08:34.291 ], 00:08:34.291 "product_name": "Raid Volume", 00:08:34.291 "block_size": 512, 00:08:34.291 "num_blocks": 63488, 00:08:34.291 "uuid": "29dfc65b-bba1-4fdd-905a-626fda87b27b", 00:08:34.291 "assigned_rate_limits": { 00:08:34.291 "rw_ios_per_sec": 0, 00:08:34.291 "rw_mbytes_per_sec": 0, 00:08:34.291 "r_mbytes_per_sec": 0, 00:08:34.291 "w_mbytes_per_sec": 0 00:08:34.291 }, 00:08:34.291 "claimed": false, 00:08:34.291 "zoned": false, 00:08:34.291 "supported_io_types": { 00:08:34.291 "read": true, 00:08:34.291 "write": true, 00:08:34.291 "unmap": false, 00:08:34.291 "flush": false, 00:08:34.291 "reset": true, 00:08:34.291 "nvme_admin": false, 00:08:34.291 "nvme_io": false, 00:08:34.291 "nvme_io_md": false, 00:08:34.291 "write_zeroes": true, 00:08:34.291 "zcopy": false, 00:08:34.291 "get_zone_info": false, 00:08:34.291 "zone_management": false, 00:08:34.291 "zone_append": false, 00:08:34.291 "compare": false, 00:08:34.291 "compare_and_write": false, 00:08:34.291 "abort": false, 00:08:34.291 "seek_hole": false, 00:08:34.291 "seek_data": false, 00:08:34.291 "copy": false, 00:08:34.291 "nvme_iov_md": false 00:08:34.291 }, 00:08:34.291 "memory_domains": [ 00:08:34.291 { 00:08:34.291 "dma_device_id": 
"system", 00:08:34.291 "dma_device_type": 1 00:08:34.291 }, 00:08:34.291 { 00:08:34.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.291 "dma_device_type": 2 00:08:34.291 }, 00:08:34.291 { 00:08:34.291 "dma_device_id": "system", 00:08:34.291 "dma_device_type": 1 00:08:34.291 }, 00:08:34.291 { 00:08:34.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.291 "dma_device_type": 2 00:08:34.291 } 00:08:34.291 ], 00:08:34.291 "driver_specific": { 00:08:34.291 "raid": { 00:08:34.291 "uuid": "29dfc65b-bba1-4fdd-905a-626fda87b27b", 00:08:34.291 "strip_size_kb": 0, 00:08:34.291 "state": "online", 00:08:34.291 "raid_level": "raid1", 00:08:34.291 "superblock": true, 00:08:34.291 "num_base_bdevs": 2, 00:08:34.291 "num_base_bdevs_discovered": 2, 00:08:34.291 "num_base_bdevs_operational": 2, 00:08:34.291 "base_bdevs_list": [ 00:08:34.291 { 00:08:34.291 "name": "pt1", 00:08:34.291 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:34.291 "is_configured": true, 00:08:34.291 "data_offset": 2048, 00:08:34.291 "data_size": 63488 00:08:34.291 }, 00:08:34.291 { 00:08:34.291 "name": "pt2", 00:08:34.291 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.291 "is_configured": true, 00:08:34.291 "data_offset": 2048, 00:08:34.291 "data_size": 63488 00:08:34.291 } 00:08:34.291 ] 00:08:34.291 } 00:08:34.291 } 00:08:34.291 }' 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:34.291 pt2' 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.291 05:46:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:34.551 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:34.551 [2024-12-12 05:46:41.813486] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.551 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.551 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 29dfc65b-bba1-4fdd-905a-626fda87b27b '!=' 29dfc65b-bba1-4fdd-905a-626fda87b27b ']' 00:08:34.551 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:34.551 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:34.551 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:34.551 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:34.551 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.551 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.551 [2024-12-12 05:46:41.845212] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:34.551 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.551 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:34.551 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:34.551 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.551 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.551 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.552 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=1 00:08:34.552 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.552 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.552 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.552 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.552 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.552 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.552 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.552 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.552 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.552 05:46:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:34.552 "name": "raid_bdev1", 00:08:34.552 "uuid": "29dfc65b-bba1-4fdd-905a-626fda87b27b", 00:08:34.552 "strip_size_kb": 0, 00:08:34.552 "state": "online", 00:08:34.552 "raid_level": "raid1", 00:08:34.552 "superblock": true, 00:08:34.552 "num_base_bdevs": 2, 00:08:34.552 "num_base_bdevs_discovered": 1, 00:08:34.552 "num_base_bdevs_operational": 1, 00:08:34.552 "base_bdevs_list": [ 00:08:34.552 { 00:08:34.552 "name": null, 00:08:34.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:34.552 "is_configured": false, 00:08:34.552 "data_offset": 0, 00:08:34.552 "data_size": 63488 00:08:34.552 }, 00:08:34.552 { 00:08:34.552 "name": "pt2", 00:08:34.552 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.552 "is_configured": true, 00:08:34.552 "data_offset": 2048, 00:08:34.552 "data_size": 63488 00:08:34.552 } 00:08:34.552 ] 00:08:34.552 }' 00:08:34.552 05:46:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:34.552 05:46:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.811 [2024-12-12 05:46:42.204618] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:34.811 [2024-12-12 05:46:42.204702] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:34.811 [2024-12-12 05:46:42.204802] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:34.811 [2024-12-12 05:46:42.204911] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:34.811 [2024-12-12 05:46:42.204969] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:34.811 
05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.811 [2024-12-12 05:46:42.280455] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:34.811 [2024-12-12 05:46:42.280531] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:34.811 [2024-12-12 05:46:42.280551] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:34.811 [2024-12-12 05:46:42.280562] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:34.811 [2024-12-12 
05:46:42.282811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:34.811 [2024-12-12 05:46:42.282855] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:34.811 [2024-12-12 05:46:42.282946] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:34.811 [2024-12-12 05:46:42.282995] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:34.811 [2024-12-12 05:46:42.283103] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:34.811 [2024-12-12 05:46:42.283115] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:34.811 [2024-12-12 05:46:42.283352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:34.811 [2024-12-12 05:46:42.283545] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:34.811 [2024-12-12 05:46:42.283558] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:08:34.811 [2024-12-12 05:46:42.283753] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:34.811 pt2 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.811 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.070 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.070 "name": "raid_bdev1", 00:08:35.070 "uuid": "29dfc65b-bba1-4fdd-905a-626fda87b27b", 00:08:35.070 "strip_size_kb": 0, 00:08:35.070 "state": "online", 00:08:35.070 "raid_level": "raid1", 00:08:35.070 "superblock": true, 00:08:35.070 "num_base_bdevs": 2, 00:08:35.070 "num_base_bdevs_discovered": 1, 00:08:35.070 "num_base_bdevs_operational": 1, 00:08:35.070 "base_bdevs_list": [ 00:08:35.070 { 00:08:35.070 "name": null, 00:08:35.070 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.070 "is_configured": false, 00:08:35.070 "data_offset": 2048, 00:08:35.071 "data_size": 63488 00:08:35.071 }, 00:08:35.071 { 00:08:35.071 "name": "pt2", 00:08:35.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:35.071 "is_configured": true, 00:08:35.071 "data_offset": 2048, 00:08:35.071 "data_size": 63488 00:08:35.071 } 00:08:35.071 ] 00:08:35.071 }' 
00:08:35.071 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.071 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.330 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:35.330 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.330 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.330 [2024-12-12 05:46:42.703699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:35.330 [2024-12-12 05:46:42.703775] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:35.330 [2024-12-12 05:46:42.703870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.330 [2024-12-12 05:46:42.703986] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:35.330 [2024-12-12 05:46:42.704033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:35.330 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.330 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.330 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:35.330 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.330 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.330 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.330 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:35.330 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:08:35.330 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:35.330 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:35.330 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.330 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.330 [2024-12-12 05:46:42.763645] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:35.330 [2024-12-12 05:46:42.763710] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:35.330 [2024-12-12 05:46:42.763751] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:35.330 [2024-12-12 05:46:42.763762] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:35.330 [2024-12-12 05:46:42.765977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:35.330 [2024-12-12 05:46:42.766013] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:35.330 [2024-12-12 05:46:42.766108] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:35.330 [2024-12-12 05:46:42.766156] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:35.330 [2024-12-12 05:46:42.766350] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:35.330 [2024-12-12 05:46:42.766364] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:35.330 [2024-12-12 05:46:42.766380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:35.330 [2024-12-12 05:46:42.766439] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
pt2 is claimed 00:08:35.330 [2024-12-12 05:46:42.766529] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:35.330 [2024-12-12 05:46:42.766541] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:35.330 pt1 00:08:35.330 [2024-12-12 05:46:42.766810] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:35.330 [2024-12-12 05:46:42.766959] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:35.330 [2024-12-12 05:46:42.766972] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:35.330 [2024-12-12 05:46:42.767123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:35.330 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.330 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:35.330 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:35.330 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:35.330 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:35.331 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:35.331 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:35.331 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:35.331 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:35.331 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:35.331 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:35.331 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:35.331 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:35.331 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.331 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.331 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:35.331 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.331 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:35.331 "name": "raid_bdev1", 00:08:35.331 "uuid": "29dfc65b-bba1-4fdd-905a-626fda87b27b", 00:08:35.331 "strip_size_kb": 0, 00:08:35.331 "state": "online", 00:08:35.331 "raid_level": "raid1", 00:08:35.331 "superblock": true, 00:08:35.331 "num_base_bdevs": 2, 00:08:35.331 "num_base_bdevs_discovered": 1, 00:08:35.331 "num_base_bdevs_operational": 1, 00:08:35.331 "base_bdevs_list": [ 00:08:35.331 { 00:08:35.331 "name": null, 00:08:35.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:35.331 "is_configured": false, 00:08:35.331 "data_offset": 2048, 00:08:35.331 "data_size": 63488 00:08:35.331 }, 00:08:35.331 { 00:08:35.331 "name": "pt2", 00:08:35.331 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:35.331 "is_configured": true, 00:08:35.331 "data_offset": 2048, 00:08:35.331 "data_size": 63488 00:08:35.331 } 00:08:35.331 ] 00:08:35.331 }' 00:08:35.331 05:46:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:35.331 05:46:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.916 05:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:35.916 05:46:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:35.916 05:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.916 05:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.916 05:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.916 05:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:35.916 05:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:35.916 05:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.916 05:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.916 05:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:35.916 [2024-12-12 05:46:43.270936] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:35.916 05:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.917 05:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 29dfc65b-bba1-4fdd-905a-626fda87b27b '!=' 29dfc65b-bba1-4fdd-905a-626fda87b27b ']' 00:08:35.917 05:46:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64183 00:08:35.917 05:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64183 ']' 00:08:35.917 05:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64183 00:08:35.917 05:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:35.917 05:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:35.917 05:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64183 00:08:35.917 
05:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:35.917 killing process with pid 64183 00:08:35.917 05:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:35.917 05:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64183' 00:08:35.917 05:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64183 00:08:35.917 [2024-12-12 05:46:43.343933] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:35.917 [2024-12-12 05:46:43.344023] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.917 [2024-12-12 05:46:43.344073] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:35.917 [2024-12-12 05:46:43.344088] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:08:35.917 05:46:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 64183 00:08:36.187 [2024-12-12 05:46:43.546191] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.125 05:46:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:37.125 00:08:37.125 real 0m5.892s 00:08:37.125 user 0m8.872s 00:08:37.125 sys 0m1.012s 00:08:37.125 05:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:37.125 ************************************ 00:08:37.125 END TEST raid_superblock_test 00:08:37.125 ************************************ 00:08:37.126 05:46:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.385 05:46:44 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:37.385 05:46:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:37.385 05:46:44 bdev_raid 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:37.385 05:46:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.385 ************************************ 00:08:37.385 START TEST raid_read_error_test 00:08:37.385 ************************************ 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:37.385 05:46:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Re2y8G4DDo 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64513 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64513 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 64513 ']' 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:37.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:37.385 05:46:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.385 [2024-12-12 05:46:44.803024] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:08:37.385 [2024-12-12 05:46:44.803139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64513 ] 00:08:37.645 [2024-12-12 05:46:44.975863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.645 [2024-12-12 05:46:45.088302] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.905 [2024-12-12 05:46:45.286353] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:37.905 [2024-12-12 05:46:45.286423] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.165 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:38.165 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:38.165 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:38.165 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:38.165 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.165 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.165 BaseBdev1_malloc 00:08:38.165 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.165 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:38.165 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.165 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.165 true 00:08:38.165 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.165 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:38.165 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.165 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.165 [2024-12-12 05:46:45.681221] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:38.165 [2024-12-12 05:46:45.681277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.165 [2024-12-12 05:46:45.681297] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:38.165 [2024-12-12 05:46:45.681307] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.165 [2024-12-12 05:46:45.683345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.165 [2024-12-12 05:46:45.683388] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:38.424 BaseBdev1 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:38.424 BaseBdev2_malloc 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.424 true 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.424 [2024-12-12 05:46:45.750732] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:38.424 [2024-12-12 05:46:45.750871] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:38.424 [2024-12-12 05:46:45.750900] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:38.424 [2024-12-12 05:46:45.750913] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:38.424 [2024-12-12 05:46:45.753421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:38.424 [2024-12-12 05:46:45.753476] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:38.424 BaseBdev2 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:38.424 05:46:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.424 [2024-12-12 05:46:45.762762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:38.424 [2024-12-12 05:46:45.764660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:38.424 [2024-12-12 05:46:45.764888] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:38.424 [2024-12-12 05:46:45.764912] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:38.424 [2024-12-12 05:46:45.765198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:38.424 [2024-12-12 05:46:45.765420] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:38.424 [2024-12-12 05:46:45.765439] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:38.424 [2024-12-12 05:46:45.765635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.424 "name": "raid_bdev1", 00:08:38.424 "uuid": "dc387fa1-aadf-4a31-b510-40d162520a47", 00:08:38.424 "strip_size_kb": 0, 00:08:38.424 "state": "online", 00:08:38.424 "raid_level": "raid1", 00:08:38.424 "superblock": true, 00:08:38.424 "num_base_bdevs": 2, 00:08:38.424 "num_base_bdevs_discovered": 2, 00:08:38.424 "num_base_bdevs_operational": 2, 00:08:38.424 "base_bdevs_list": [ 00:08:38.424 { 00:08:38.424 "name": "BaseBdev1", 00:08:38.424 "uuid": "7368d893-ec74-5280-a79a-0849991c4b78", 00:08:38.424 "is_configured": true, 00:08:38.424 "data_offset": 2048, 00:08:38.424 "data_size": 63488 00:08:38.424 }, 00:08:38.424 { 00:08:38.424 "name": "BaseBdev2", 00:08:38.424 "uuid": "5ecaf186-7753-5a94-a635-9578a25849ec", 00:08:38.424 "is_configured": true, 00:08:38.424 "data_offset": 2048, 00:08:38.424 "data_size": 63488 00:08:38.424 } 00:08:38.424 ] 00:08:38.424 }' 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.424 05:46:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.684 05:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:38.684 05:46:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:38.944 [2024-12-12 05:46:46.243249] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:39.883 05:46:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.883 05:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:39.883 "name": "raid_bdev1", 00:08:39.883 "uuid": "dc387fa1-aadf-4a31-b510-40d162520a47", 00:08:39.883 "strip_size_kb": 0, 00:08:39.883 "state": "online", 00:08:39.883 "raid_level": "raid1", 00:08:39.883 "superblock": true, 00:08:39.883 "num_base_bdevs": 2, 00:08:39.883 "num_base_bdevs_discovered": 2, 00:08:39.883 "num_base_bdevs_operational": 2, 00:08:39.883 "base_bdevs_list": [ 00:08:39.883 { 00:08:39.883 "name": "BaseBdev1", 00:08:39.883 "uuid": "7368d893-ec74-5280-a79a-0849991c4b78", 00:08:39.883 "is_configured": true, 00:08:39.883 "data_offset": 2048, 00:08:39.883 "data_size": 63488 00:08:39.883 }, 00:08:39.883 { 00:08:39.884 "name": "BaseBdev2", 00:08:39.884 "uuid": "5ecaf186-7753-5a94-a635-9578a25849ec", 00:08:39.884 "is_configured": true, 00:08:39.884 "data_offset": 2048, 00:08:39.884 "data_size": 63488 
00:08:39.884 } 00:08:39.884 ] 00:08:39.884 }' 00:08:39.884 05:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:39.884 05:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.144 05:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:40.144 05:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:40.144 05:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.144 [2024-12-12 05:46:47.568747] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:40.144 [2024-12-12 05:46:47.568849] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.144 [2024-12-12 05:46:47.571585] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.144 [2024-12-12 05:46:47.571687] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:40.144 [2024-12-12 05:46:47.571836] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:40.144 [2024-12-12 05:46:47.571893] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:40.144 { 00:08:40.144 "results": [ 00:08:40.144 { 00:08:40.144 "job": "raid_bdev1", 00:08:40.144 "core_mask": "0x1", 00:08:40.144 "workload": "randrw", 00:08:40.144 "percentage": 50, 00:08:40.144 "status": "finished", 00:08:40.144 "queue_depth": 1, 00:08:40.144 "io_size": 131072, 00:08:40.144 "runtime": 1.326454, 00:08:40.144 "iops": 17862.662406687305, 00:08:40.144 "mibps": 2232.832800835913, 00:08:40.144 "io_failed": 0, 00:08:40.144 "io_timeout": 0, 00:08:40.144 "avg_latency_us": 53.27718571908279, 00:08:40.144 "min_latency_us": 24.146724890829695, 00:08:40.144 "max_latency_us": 1438.071615720524 00:08:40.144 } 00:08:40.144 ], 
00:08:40.144 "core_count": 1 00:08:40.144 } 00:08:40.144 05:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:40.144 05:46:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64513 00:08:40.144 05:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 64513 ']' 00:08:40.144 05:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 64513 00:08:40.144 05:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:40.144 05:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.144 05:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64513 00:08:40.144 05:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:40.144 05:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:40.144 05:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64513' 00:08:40.144 killing process with pid 64513 00:08:40.144 05:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 64513 00:08:40.144 05:46:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 64513 00:08:40.144 [2024-12-12 05:46:47.616092] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:40.404 [2024-12-12 05:46:47.748837] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:41.785 05:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Re2y8G4DDo 00:08:41.786 05:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:41.786 05:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:41.786 05:46:48 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:41.786 05:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:41.786 ************************************ 00:08:41.786 END TEST raid_read_error_test 00:08:41.786 ************************************ 00:08:41.786 05:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:41.786 05:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:41.786 05:46:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:41.786 00:08:41.786 real 0m4.194s 00:08:41.786 user 0m4.943s 00:08:41.786 sys 0m0.539s 00:08:41.786 05:46:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:41.786 05:46:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.786 05:46:48 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:41.786 05:46:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:41.786 05:46:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:41.786 05:46:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:41.786 ************************************ 00:08:41.786 START TEST raid_write_error_test 00:08:41.786 ************************************ 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.QjuvqKIf9W 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=64652 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 64652 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 64652 ']' 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:41.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:41.786 05:46:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.786 [2024-12-12 05:46:49.060243] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:08:41.786 [2024-12-12 05:46:49.060418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64652 ] 00:08:41.786 [2024-12-12 05:46:49.230722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.046 [2024-12-12 05:46:49.336292] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.046 [2024-12-12 05:46:49.530350] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.046 [2024-12-12 05:46:49.530437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.615 BaseBdev1_malloc 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.615 true 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.615 [2024-12-12 05:46:49.934220] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:42.615 [2024-12-12 05:46:49.934278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.615 [2024-12-12 05:46:49.934313] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:42.615 [2024-12-12 05:46:49.934323] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.615 [2024-12-12 05:46:49.936344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.615 [2024-12-12 05:46:49.936384] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:42.615 BaseBdev1 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.615 BaseBdev2_malloc 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:42.615 05:46:49 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.615 true 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.615 05:46:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.615 [2024-12-12 05:46:49.999013] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:42.615 [2024-12-12 05:46:49.999065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.615 [2024-12-12 05:46:49.999081] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:42.615 [2024-12-12 05:46:49.999091] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.615 [2024-12-12 05:46:50.001055] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.615 [2024-12-12 05:46:50.001165] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:42.615 BaseBdev2 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.615 [2024-12-12 05:46:50.011042] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:42.615 [2024-12-12 05:46:50.012859] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:42.615 [2024-12-12 05:46:50.013094] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:42.615 [2024-12-12 05:46:50.013146] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:42.615 [2024-12-12 05:46:50.013417] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:42.615 [2024-12-12 05:46:50.013634] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:42.615 [2024-12-12 05:46:50.013679] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:42.615 [2024-12-12 05:46:50.013882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.615 "name": "raid_bdev1", 00:08:42.615 "uuid": "1aaedfff-af6f-4b6c-9a4d-3b405883970d", 00:08:42.615 "strip_size_kb": 0, 00:08:42.615 "state": "online", 00:08:42.615 "raid_level": "raid1", 00:08:42.615 "superblock": true, 00:08:42.615 "num_base_bdevs": 2, 00:08:42.615 "num_base_bdevs_discovered": 2, 00:08:42.615 "num_base_bdevs_operational": 2, 00:08:42.615 "base_bdevs_list": [ 00:08:42.615 { 00:08:42.615 "name": "BaseBdev1", 00:08:42.615 "uuid": "cdf8d327-cfc0-58ac-a3a8-0bc6d318496d", 00:08:42.615 "is_configured": true, 00:08:42.615 "data_offset": 2048, 00:08:42.615 "data_size": 63488 00:08:42.615 }, 00:08:42.615 { 00:08:42.615 "name": "BaseBdev2", 00:08:42.615 "uuid": "bb4fd59b-7680-5377-9d99-28aa30772aeb", 00:08:42.615 "is_configured": true, 00:08:42.615 "data_offset": 2048, 00:08:42.615 "data_size": 63488 00:08:42.615 } 00:08:42.615 ] 00:08:42.615 }' 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.615 05:46:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.185 05:46:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:43.185 05:46:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:43.185 [2024-12-12 05:46:50.507749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.140 [2024-12-12 05:46:51.424070] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:44.140 [2024-12-12 05:46:51.424133] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:44.140 [2024-12-12 05:46:51.424324] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.140 "name": "raid_bdev1", 00:08:44.140 "uuid": "1aaedfff-af6f-4b6c-9a4d-3b405883970d", 00:08:44.140 "strip_size_kb": 0, 00:08:44.140 "state": "online", 00:08:44.140 "raid_level": "raid1", 00:08:44.140 "superblock": true, 00:08:44.140 "num_base_bdevs": 2, 00:08:44.140 "num_base_bdevs_discovered": 1, 00:08:44.140 "num_base_bdevs_operational": 1, 00:08:44.140 "base_bdevs_list": [ 00:08:44.140 { 00:08:44.140 "name": null, 00:08:44.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.140 "is_configured": false, 00:08:44.140 "data_offset": 0, 00:08:44.140 "data_size": 63488 00:08:44.140 }, 00:08:44.140 { 00:08:44.140 "name": 
"BaseBdev2", 00:08:44.140 "uuid": "bb4fd59b-7680-5377-9d99-28aa30772aeb", 00:08:44.140 "is_configured": true, 00:08:44.140 "data_offset": 2048, 00:08:44.140 "data_size": 63488 00:08:44.140 } 00:08:44.140 ] 00:08:44.140 }' 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.140 05:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.400 05:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:44.400 05:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.400 05:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:44.400 [2024-12-12 05:46:51.848535] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:44.400 [2024-12-12 05:46:51.848628] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.400 [2024-12-12 05:46:51.851174] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.400 [2024-12-12 05:46:51.851255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:44.400 [2024-12-12 05:46:51.851331] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:44.400 [2024-12-12 05:46:51.851385] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:44.400 { 00:08:44.400 "results": [ 00:08:44.400 { 00:08:44.400 "job": "raid_bdev1", 00:08:44.400 "core_mask": "0x1", 00:08:44.400 "workload": "randrw", 00:08:44.400 "percentage": 50, 00:08:44.400 "status": "finished", 00:08:44.400 "queue_depth": 1, 00:08:44.400 "io_size": 131072, 00:08:44.400 "runtime": 1.341646, 00:08:44.400 "iops": 21902.20072955161, 00:08:44.400 "mibps": 2737.775091193951, 00:08:44.400 "io_failed": 0, 00:08:44.400 "io_timeout": 0, 
00:08:44.400 "avg_latency_us": 43.04313227569839, 00:08:44.400 "min_latency_us": 21.463755458515283, 00:08:44.400 "max_latency_us": 1373.6803493449781 00:08:44.400 } 00:08:44.400 ], 00:08:44.400 "core_count": 1 00:08:44.400 } 00:08:44.400 05:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.400 05:46:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 64652 00:08:44.400 05:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 64652 ']' 00:08:44.400 05:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 64652 00:08:44.400 05:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:44.400 05:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.400 05:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64652 00:08:44.400 killing process with pid 64652 00:08:44.400 05:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:44.400 05:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:44.400 05:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64652' 00:08:44.400 05:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 64652 00:08:44.400 [2024-12-12 05:46:51.888274] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:44.400 05:46:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 64652 00:08:44.660 [2024-12-12 05:46:52.018528] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:46.042 05:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.QjuvqKIf9W 00:08:46.042 05:46:53 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:46.042 05:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:46.042 05:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:08:46.042 05:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:08:46.042 05:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:46.042 05:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:46.042 ************************************ 00:08:46.042 END TEST raid_write_error_test 00:08:46.042 ************************************ 00:08:46.042 05:46:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:46.042 00:08:46.042 real 0m4.186s 00:08:46.042 user 0m4.962s 00:08:46.042 sys 0m0.536s 00:08:46.042 05:46:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.042 05:46:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.042 05:46:53 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:46.042 05:46:53 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:46.042 05:46:53 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:46.042 05:46:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:46.042 05:46:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.042 05:46:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:46.042 ************************************ 00:08:46.042 START TEST raid_state_function_test 00:08:46.042 ************************************ 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:46.042 
05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=64786 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64786' 00:08:46.042 Process raid pid: 64786 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 64786 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 64786 ']' 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.042 05:46:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.042 [2024-12-12 05:46:53.317004] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:08:46.042 [2024-12-12 05:46:53.317190] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.042 [2024-12-12 05:46:53.491328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.302 [2024-12-12 05:46:53.599879] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.302 [2024-12-12 05:46:53.799742] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.302 [2024-12-12 05:46:53.799856] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.872 [2024-12-12 05:46:54.139589] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:46.872 [2024-12-12 05:46:54.139705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:46.872 [2024-12-12 05:46:54.139720] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.872 [2024-12-12 05:46:54.139730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.872 [2024-12-12 05:46:54.139737] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:46.872 [2024-12-12 05:46:54.139745] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.872 "name": "Existed_Raid", 00:08:46.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.872 "strip_size_kb": 64, 00:08:46.872 "state": "configuring", 00:08:46.872 "raid_level": "raid0", 00:08:46.872 "superblock": false, 00:08:46.872 "num_base_bdevs": 3, 00:08:46.872 "num_base_bdevs_discovered": 0, 00:08:46.872 "num_base_bdevs_operational": 3, 00:08:46.872 "base_bdevs_list": [ 00:08:46.872 { 00:08:46.872 "name": "BaseBdev1", 00:08:46.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.872 "is_configured": false, 00:08:46.872 "data_offset": 0, 00:08:46.872 "data_size": 0 00:08:46.872 }, 00:08:46.872 { 00:08:46.872 "name": "BaseBdev2", 00:08:46.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.872 "is_configured": false, 00:08:46.872 "data_offset": 0, 00:08:46.872 "data_size": 0 00:08:46.872 }, 00:08:46.872 { 00:08:46.872 "name": "BaseBdev3", 00:08:46.872 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.872 "is_configured": false, 00:08:46.872 "data_offset": 0, 00:08:46.872 "data_size": 0 00:08:46.872 } 00:08:46.872 ] 00:08:46.872 }' 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.872 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.132 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:47.132 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.132 05:46:54 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.132 [2024-12-12 05:46:54.594748] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:47.132 [2024-12-12 05:46:54.594832] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:47.132 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.132 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:47.132 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.132 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.132 [2024-12-12 05:46:54.606723] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:47.132 [2024-12-12 05:46:54.606805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:47.132 [2024-12-12 05:46:54.606832] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.132 [2024-12-12 05:46:54.606856] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.132 [2024-12-12 05:46:54.606874] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:47.132 [2024-12-12 05:46:54.606895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:47.132 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.132 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:47.132 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:47.133 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.133 [2024-12-12 05:46:54.652410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.133 BaseBdev1 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.392 [ 00:08:47.392 { 00:08:47.392 "name": "BaseBdev1", 00:08:47.392 "aliases": [ 00:08:47.392 "a961004a-f57f-4e17-9927-06839a81caef" 00:08:47.392 ], 00:08:47.392 
"product_name": "Malloc disk", 00:08:47.392 "block_size": 512, 00:08:47.392 "num_blocks": 65536, 00:08:47.392 "uuid": "a961004a-f57f-4e17-9927-06839a81caef", 00:08:47.392 "assigned_rate_limits": { 00:08:47.392 "rw_ios_per_sec": 0, 00:08:47.392 "rw_mbytes_per_sec": 0, 00:08:47.392 "r_mbytes_per_sec": 0, 00:08:47.392 "w_mbytes_per_sec": 0 00:08:47.392 }, 00:08:47.392 "claimed": true, 00:08:47.392 "claim_type": "exclusive_write", 00:08:47.392 "zoned": false, 00:08:47.392 "supported_io_types": { 00:08:47.392 "read": true, 00:08:47.392 "write": true, 00:08:47.392 "unmap": true, 00:08:47.392 "flush": true, 00:08:47.392 "reset": true, 00:08:47.392 "nvme_admin": false, 00:08:47.392 "nvme_io": false, 00:08:47.392 "nvme_io_md": false, 00:08:47.392 "write_zeroes": true, 00:08:47.392 "zcopy": true, 00:08:47.392 "get_zone_info": false, 00:08:47.392 "zone_management": false, 00:08:47.392 "zone_append": false, 00:08:47.392 "compare": false, 00:08:47.392 "compare_and_write": false, 00:08:47.392 "abort": true, 00:08:47.392 "seek_hole": false, 00:08:47.392 "seek_data": false, 00:08:47.392 "copy": true, 00:08:47.392 "nvme_iov_md": false 00:08:47.392 }, 00:08:47.392 "memory_domains": [ 00:08:47.392 { 00:08:47.392 "dma_device_id": "system", 00:08:47.392 "dma_device_type": 1 00:08:47.392 }, 00:08:47.392 { 00:08:47.392 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.392 "dma_device_type": 2 00:08:47.392 } 00:08:47.392 ], 00:08:47.392 "driver_specific": {} 00:08:47.392 } 00:08:47.392 ] 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.392 05:46:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.392 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.392 "name": "Existed_Raid", 00:08:47.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.392 "strip_size_kb": 64, 00:08:47.392 "state": "configuring", 00:08:47.392 "raid_level": "raid0", 00:08:47.392 "superblock": false, 00:08:47.392 "num_base_bdevs": 3, 00:08:47.392 "num_base_bdevs_discovered": 1, 00:08:47.392 "num_base_bdevs_operational": 3, 00:08:47.392 "base_bdevs_list": [ 00:08:47.392 { 00:08:47.392 "name": "BaseBdev1", 
00:08:47.393 "uuid": "a961004a-f57f-4e17-9927-06839a81caef", 00:08:47.393 "is_configured": true, 00:08:47.393 "data_offset": 0, 00:08:47.393 "data_size": 65536 00:08:47.393 }, 00:08:47.393 { 00:08:47.393 "name": "BaseBdev2", 00:08:47.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.393 "is_configured": false, 00:08:47.393 "data_offset": 0, 00:08:47.393 "data_size": 0 00:08:47.393 }, 00:08:47.393 { 00:08:47.393 "name": "BaseBdev3", 00:08:47.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.393 "is_configured": false, 00:08:47.393 "data_offset": 0, 00:08:47.393 "data_size": 0 00:08:47.393 } 00:08:47.393 ] 00:08:47.393 }' 00:08:47.393 05:46:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.393 05:46:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.653 [2024-12-12 05:46:55.147616] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:47.653 [2024-12-12 05:46:55.147725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.653 [2024-12-12 
05:46:55.155652] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.653 [2024-12-12 05:46:55.157418] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.653 [2024-12-12 05:46:55.157463] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.653 [2024-12-12 05:46:55.157473] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:47.653 [2024-12-12 05:46:55.157482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.653 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.912 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.912 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.912 "name": "Existed_Raid", 00:08:47.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.912 "strip_size_kb": 64, 00:08:47.912 "state": "configuring", 00:08:47.912 "raid_level": "raid0", 00:08:47.912 "superblock": false, 00:08:47.912 "num_base_bdevs": 3, 00:08:47.912 "num_base_bdevs_discovered": 1, 00:08:47.912 "num_base_bdevs_operational": 3, 00:08:47.912 "base_bdevs_list": [ 00:08:47.912 { 00:08:47.912 "name": "BaseBdev1", 00:08:47.912 "uuid": "a961004a-f57f-4e17-9927-06839a81caef", 00:08:47.912 "is_configured": true, 00:08:47.912 "data_offset": 0, 00:08:47.912 "data_size": 65536 00:08:47.912 }, 00:08:47.912 { 00:08:47.912 "name": "BaseBdev2", 00:08:47.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.912 "is_configured": false, 00:08:47.912 "data_offset": 0, 00:08:47.912 "data_size": 0 00:08:47.912 }, 00:08:47.912 { 00:08:47.912 "name": "BaseBdev3", 00:08:47.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.912 "is_configured": false, 00:08:47.912 "data_offset": 0, 00:08:47.912 "data_size": 0 00:08:47.912 } 00:08:47.912 ] 00:08:47.912 }' 00:08:47.912 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:47.912 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.172 [2024-12-12 05:46:55.647952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:48.172 BaseBdev2 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:48.172 05:46:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.172 [ 00:08:48.172 { 00:08:48.172 "name": "BaseBdev2", 00:08:48.172 "aliases": [ 00:08:48.172 "27d5bba4-d3bd-4d4a-b026-e4b912189bf8" 00:08:48.172 ], 00:08:48.172 "product_name": "Malloc disk", 00:08:48.172 "block_size": 512, 00:08:48.172 "num_blocks": 65536, 00:08:48.172 "uuid": "27d5bba4-d3bd-4d4a-b026-e4b912189bf8", 00:08:48.172 "assigned_rate_limits": { 00:08:48.172 "rw_ios_per_sec": 0, 00:08:48.172 "rw_mbytes_per_sec": 0, 00:08:48.172 "r_mbytes_per_sec": 0, 00:08:48.172 "w_mbytes_per_sec": 0 00:08:48.172 }, 00:08:48.172 "claimed": true, 00:08:48.172 "claim_type": "exclusive_write", 00:08:48.172 "zoned": false, 00:08:48.172 "supported_io_types": { 00:08:48.172 "read": true, 00:08:48.172 "write": true, 00:08:48.172 "unmap": true, 00:08:48.172 "flush": true, 00:08:48.172 "reset": true, 00:08:48.172 "nvme_admin": false, 00:08:48.172 "nvme_io": false, 00:08:48.172 "nvme_io_md": false, 00:08:48.172 "write_zeroes": true, 00:08:48.172 "zcopy": true, 00:08:48.172 "get_zone_info": false, 00:08:48.172 "zone_management": false, 00:08:48.172 "zone_append": false, 00:08:48.172 "compare": false, 00:08:48.172 "compare_and_write": false, 00:08:48.172 "abort": true, 00:08:48.172 "seek_hole": false, 00:08:48.172 "seek_data": false, 00:08:48.172 "copy": true, 00:08:48.172 "nvme_iov_md": false 00:08:48.172 }, 00:08:48.172 "memory_domains": [ 00:08:48.172 { 00:08:48.172 "dma_device_id": "system", 00:08:48.172 "dma_device_type": 1 00:08:48.172 }, 00:08:48.172 { 00:08:48.172 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.172 "dma_device_type": 2 00:08:48.172 } 00:08:48.172 ], 00:08:48.172 "driver_specific": {} 00:08:48.172 } 00:08:48.172 ] 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.172 05:46:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.172 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.173 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.173 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.173 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.173 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.173 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.173 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.173 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.433 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.433 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.433 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.433 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.433 05:46:55 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.433 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.433 "name": "Existed_Raid", 00:08:48.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.433 "strip_size_kb": 64, 00:08:48.433 "state": "configuring", 00:08:48.433 "raid_level": "raid0", 00:08:48.433 "superblock": false, 00:08:48.433 "num_base_bdevs": 3, 00:08:48.433 "num_base_bdevs_discovered": 2, 00:08:48.433 "num_base_bdevs_operational": 3, 00:08:48.433 "base_bdevs_list": [ 00:08:48.433 { 00:08:48.433 "name": "BaseBdev1", 00:08:48.433 "uuid": "a961004a-f57f-4e17-9927-06839a81caef", 00:08:48.433 "is_configured": true, 00:08:48.433 "data_offset": 0, 00:08:48.433 "data_size": 65536 00:08:48.433 }, 00:08:48.433 { 00:08:48.433 "name": "BaseBdev2", 00:08:48.433 "uuid": "27d5bba4-d3bd-4d4a-b026-e4b912189bf8", 00:08:48.433 "is_configured": true, 00:08:48.433 "data_offset": 0, 00:08:48.433 "data_size": 65536 00:08:48.433 }, 00:08:48.433 { 00:08:48.433 "name": "BaseBdev3", 00:08:48.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.433 "is_configured": false, 00:08:48.433 "data_offset": 0, 00:08:48.433 "data_size": 0 00:08:48.433 } 00:08:48.433 ] 00:08:48.433 }' 00:08:48.433 05:46:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.433 05:46:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.693 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:48.693 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.693 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.693 [2024-12-12 05:46:56.186951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:48.693 [2024-12-12 05:46:56.187067] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:48.693 [2024-12-12 05:46:56.187099] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:48.693 [2024-12-12 05:46:56.187442] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:48.693 [2024-12-12 05:46:56.187683] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:48.693 [2024-12-12 05:46:56.187730] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:48.693 [2024-12-12 05:46:56.188069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.693 BaseBdev3 00:08:48.693 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.693 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:48.693 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:48.693 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:48.693 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:48.693 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:48.693 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:48.693 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:48.693 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.693 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.693 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.693 
05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:48.693 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.693 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.952 [ 00:08:48.952 { 00:08:48.952 "name": "BaseBdev3", 00:08:48.952 "aliases": [ 00:08:48.952 "22c230ad-fdec-4d57-9419-76dcc1cc926c" 00:08:48.952 ], 00:08:48.952 "product_name": "Malloc disk", 00:08:48.952 "block_size": 512, 00:08:48.952 "num_blocks": 65536, 00:08:48.952 "uuid": "22c230ad-fdec-4d57-9419-76dcc1cc926c", 00:08:48.952 "assigned_rate_limits": { 00:08:48.952 "rw_ios_per_sec": 0, 00:08:48.952 "rw_mbytes_per_sec": 0, 00:08:48.952 "r_mbytes_per_sec": 0, 00:08:48.952 "w_mbytes_per_sec": 0 00:08:48.952 }, 00:08:48.952 "claimed": true, 00:08:48.952 "claim_type": "exclusive_write", 00:08:48.953 "zoned": false, 00:08:48.953 "supported_io_types": { 00:08:48.953 "read": true, 00:08:48.953 "write": true, 00:08:48.953 "unmap": true, 00:08:48.953 "flush": true, 00:08:48.953 "reset": true, 00:08:48.953 "nvme_admin": false, 00:08:48.953 "nvme_io": false, 00:08:48.953 "nvme_io_md": false, 00:08:48.953 "write_zeroes": true, 00:08:48.953 "zcopy": true, 00:08:48.953 "get_zone_info": false, 00:08:48.953 "zone_management": false, 00:08:48.953 "zone_append": false, 00:08:48.953 "compare": false, 00:08:48.953 "compare_and_write": false, 00:08:48.953 "abort": true, 00:08:48.953 "seek_hole": false, 00:08:48.953 "seek_data": false, 00:08:48.953 "copy": true, 00:08:48.953 "nvme_iov_md": false 00:08:48.953 }, 00:08:48.953 "memory_domains": [ 00:08:48.953 { 00:08:48.953 "dma_device_id": "system", 00:08:48.953 "dma_device_type": 1 00:08:48.953 }, 00:08:48.953 { 00:08:48.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.953 "dma_device_type": 2 00:08:48.953 } 00:08:48.953 ], 00:08:48.953 "driver_specific": {} 00:08:48.953 } 00:08:48.953 ] 
00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.953 "name": "Existed_Raid", 00:08:48.953 "uuid": "30596426-51c5-4591-8704-80529086660f", 00:08:48.953 "strip_size_kb": 64, 00:08:48.953 "state": "online", 00:08:48.953 "raid_level": "raid0", 00:08:48.953 "superblock": false, 00:08:48.953 "num_base_bdevs": 3, 00:08:48.953 "num_base_bdevs_discovered": 3, 00:08:48.953 "num_base_bdevs_operational": 3, 00:08:48.953 "base_bdevs_list": [ 00:08:48.953 { 00:08:48.953 "name": "BaseBdev1", 00:08:48.953 "uuid": "a961004a-f57f-4e17-9927-06839a81caef", 00:08:48.953 "is_configured": true, 00:08:48.953 "data_offset": 0, 00:08:48.953 "data_size": 65536 00:08:48.953 }, 00:08:48.953 { 00:08:48.953 "name": "BaseBdev2", 00:08:48.953 "uuid": "27d5bba4-d3bd-4d4a-b026-e4b912189bf8", 00:08:48.953 "is_configured": true, 00:08:48.953 "data_offset": 0, 00:08:48.953 "data_size": 65536 00:08:48.953 }, 00:08:48.953 { 00:08:48.953 "name": "BaseBdev3", 00:08:48.953 "uuid": "22c230ad-fdec-4d57-9419-76dcc1cc926c", 00:08:48.953 "is_configured": true, 00:08:48.953 "data_offset": 0, 00:08:48.953 "data_size": 65536 00:08:48.953 } 00:08:48.953 ] 00:08:48.953 }' 00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.953 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.213 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:49.213 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:49.213 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:49.213 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:08:49.213 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:49.213 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:49.213 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:49.213 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:49.213 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.213 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.213 [2024-12-12 05:46:56.654525] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.213 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.213 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:49.213 "name": "Existed_Raid", 00:08:49.213 "aliases": [ 00:08:49.213 "30596426-51c5-4591-8704-80529086660f" 00:08:49.213 ], 00:08:49.213 "product_name": "Raid Volume", 00:08:49.213 "block_size": 512, 00:08:49.213 "num_blocks": 196608, 00:08:49.213 "uuid": "30596426-51c5-4591-8704-80529086660f", 00:08:49.213 "assigned_rate_limits": { 00:08:49.213 "rw_ios_per_sec": 0, 00:08:49.213 "rw_mbytes_per_sec": 0, 00:08:49.213 "r_mbytes_per_sec": 0, 00:08:49.213 "w_mbytes_per_sec": 0 00:08:49.213 }, 00:08:49.213 "claimed": false, 00:08:49.213 "zoned": false, 00:08:49.213 "supported_io_types": { 00:08:49.213 "read": true, 00:08:49.213 "write": true, 00:08:49.213 "unmap": true, 00:08:49.213 "flush": true, 00:08:49.213 "reset": true, 00:08:49.213 "nvme_admin": false, 00:08:49.213 "nvme_io": false, 00:08:49.213 "nvme_io_md": false, 00:08:49.213 "write_zeroes": true, 00:08:49.213 "zcopy": false, 00:08:49.213 "get_zone_info": false, 00:08:49.213 "zone_management": false, 00:08:49.213 
"zone_append": false, 00:08:49.213 "compare": false, 00:08:49.213 "compare_and_write": false, 00:08:49.213 "abort": false, 00:08:49.213 "seek_hole": false, 00:08:49.213 "seek_data": false, 00:08:49.213 "copy": false, 00:08:49.213 "nvme_iov_md": false 00:08:49.213 }, 00:08:49.213 "memory_domains": [ 00:08:49.213 { 00:08:49.213 "dma_device_id": "system", 00:08:49.213 "dma_device_type": 1 00:08:49.213 }, 00:08:49.213 { 00:08:49.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.213 "dma_device_type": 2 00:08:49.213 }, 00:08:49.213 { 00:08:49.213 "dma_device_id": "system", 00:08:49.213 "dma_device_type": 1 00:08:49.213 }, 00:08:49.213 { 00:08:49.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.213 "dma_device_type": 2 00:08:49.213 }, 00:08:49.213 { 00:08:49.213 "dma_device_id": "system", 00:08:49.213 "dma_device_type": 1 00:08:49.213 }, 00:08:49.213 { 00:08:49.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.213 "dma_device_type": 2 00:08:49.213 } 00:08:49.213 ], 00:08:49.213 "driver_specific": { 00:08:49.213 "raid": { 00:08:49.213 "uuid": "30596426-51c5-4591-8704-80529086660f", 00:08:49.213 "strip_size_kb": 64, 00:08:49.213 "state": "online", 00:08:49.213 "raid_level": "raid0", 00:08:49.213 "superblock": false, 00:08:49.213 "num_base_bdevs": 3, 00:08:49.213 "num_base_bdevs_discovered": 3, 00:08:49.213 "num_base_bdevs_operational": 3, 00:08:49.213 "base_bdevs_list": [ 00:08:49.213 { 00:08:49.213 "name": "BaseBdev1", 00:08:49.213 "uuid": "a961004a-f57f-4e17-9927-06839a81caef", 00:08:49.213 "is_configured": true, 00:08:49.213 "data_offset": 0, 00:08:49.213 "data_size": 65536 00:08:49.213 }, 00:08:49.213 { 00:08:49.213 "name": "BaseBdev2", 00:08:49.213 "uuid": "27d5bba4-d3bd-4d4a-b026-e4b912189bf8", 00:08:49.213 "is_configured": true, 00:08:49.213 "data_offset": 0, 00:08:49.213 "data_size": 65536 00:08:49.213 }, 00:08:49.213 { 00:08:49.213 "name": "BaseBdev3", 00:08:49.213 "uuid": "22c230ad-fdec-4d57-9419-76dcc1cc926c", 00:08:49.213 "is_configured": true, 
00:08:49.213 "data_offset": 0, 00:08:49.213 "data_size": 65536 00:08:49.213 } 00:08:49.213 ] 00:08:49.213 } 00:08:49.213 } 00:08:49.213 }' 00:08:49.213 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.213 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:49.213 BaseBdev2 00:08:49.213 BaseBdev3' 00:08:49.213 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.474 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.474 [2024-12-12 05:46:56.901799] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:49.474 [2024-12-12 05:46:56.901864] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.474 [2024-12-12 05:46:56.901920] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.734 05:46:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.734 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:49.734 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:49.734 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:49.734 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:49.734 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:49.734 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:08:49.734 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:49.734 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:49.734 05:46:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.734 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.734 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:49.734 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.734 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.734 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:08:49.734 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.734 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.734 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:49.734 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.734 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.734 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.734 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.734 "name": "Existed_Raid", 00:08:49.734 "uuid": "30596426-51c5-4591-8704-80529086660f", 00:08:49.734 "strip_size_kb": 64, 00:08:49.734 "state": "offline", 00:08:49.734 "raid_level": "raid0", 00:08:49.734 "superblock": false, 00:08:49.734 "num_base_bdevs": 3, 00:08:49.734 "num_base_bdevs_discovered": 2, 00:08:49.734 "num_base_bdevs_operational": 2, 00:08:49.734 "base_bdevs_list": [ 00:08:49.734 { 00:08:49.734 "name": null, 00:08:49.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:49.734 "is_configured": false, 00:08:49.734 "data_offset": 0, 00:08:49.734 "data_size": 65536 00:08:49.734 }, 00:08:49.734 { 00:08:49.734 "name": "BaseBdev2", 00:08:49.734 "uuid": "27d5bba4-d3bd-4d4a-b026-e4b912189bf8", 00:08:49.734 "is_configured": true, 00:08:49.734 "data_offset": 0, 00:08:49.734 "data_size": 65536 00:08:49.734 }, 00:08:49.734 { 00:08:49.734 "name": "BaseBdev3", 00:08:49.734 "uuid": "22c230ad-fdec-4d57-9419-76dcc1cc926c", 00:08:49.734 "is_configured": true, 00:08:49.734 "data_offset": 0, 00:08:49.734 "data_size": 65536 00:08:49.734 } 00:08:49.734 ] 00:08:49.734 }' 00:08:49.734 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.734 05:46:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.994 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:49.994 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:49.994 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.994 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.994 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.994 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:49.994 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.994 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:49.994 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:49.994 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:49.994 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.994 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.994 [2024-12-12 05:46:57.514575] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:50.254 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.254 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:50.254 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.254 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.254 05:46:57 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.254 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.254 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:50.254 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.254 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:50.254 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:50.254 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:08:50.254 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.254 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.254 [2024-12-12 05:46:57.665750] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:50.254 [2024-12-12 05:46:57.665844] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:50.254 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.254 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:50.254 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:50.254 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.254 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:50.254 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.254 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:50.514 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.514 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:50.514 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:50.514 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:08:50.514 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:08:50.514 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:50.514 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.515 BaseBdev2 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.515 [ 00:08:50.515 { 00:08:50.515 "name": "BaseBdev2", 00:08:50.515 "aliases": [ 00:08:50.515 "aba9101a-449a-4504-ac4c-ee2cbda5d31c" 00:08:50.515 ], 00:08:50.515 "product_name": "Malloc disk", 00:08:50.515 "block_size": 512, 00:08:50.515 "num_blocks": 65536, 00:08:50.515 "uuid": "aba9101a-449a-4504-ac4c-ee2cbda5d31c", 00:08:50.515 "assigned_rate_limits": { 00:08:50.515 "rw_ios_per_sec": 0, 00:08:50.515 "rw_mbytes_per_sec": 0, 00:08:50.515 "r_mbytes_per_sec": 0, 00:08:50.515 "w_mbytes_per_sec": 0 00:08:50.515 }, 00:08:50.515 "claimed": false, 00:08:50.515 "zoned": false, 00:08:50.515 "supported_io_types": { 00:08:50.515 "read": true, 00:08:50.515 "write": true, 00:08:50.515 "unmap": true, 00:08:50.515 "flush": true, 00:08:50.515 "reset": true, 00:08:50.515 "nvme_admin": false, 00:08:50.515 "nvme_io": false, 00:08:50.515 "nvme_io_md": false, 00:08:50.515 "write_zeroes": true, 00:08:50.515 "zcopy": true, 00:08:50.515 "get_zone_info": false, 00:08:50.515 "zone_management": false, 00:08:50.515 "zone_append": false, 00:08:50.515 "compare": false, 00:08:50.515 "compare_and_write": false, 00:08:50.515 "abort": true, 00:08:50.515 "seek_hole": false, 00:08:50.515 "seek_data": false, 00:08:50.515 "copy": true, 00:08:50.515 "nvme_iov_md": false 00:08:50.515 }, 00:08:50.515 "memory_domains": [ 00:08:50.515 { 00:08:50.515 "dma_device_id": "system", 00:08:50.515 "dma_device_type": 1 00:08:50.515 }, 
00:08:50.515 { 00:08:50.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.515 "dma_device_type": 2 00:08:50.515 } 00:08:50.515 ], 00:08:50.515 "driver_specific": {} 00:08:50.515 } 00:08:50.515 ] 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.515 BaseBdev3 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.515 [ 00:08:50.515 { 00:08:50.515 "name": "BaseBdev3", 00:08:50.515 "aliases": [ 00:08:50.515 "979c71f5-1364-4973-bc87-a1eb3de3cd65" 00:08:50.515 ], 00:08:50.515 "product_name": "Malloc disk", 00:08:50.515 "block_size": 512, 00:08:50.515 "num_blocks": 65536, 00:08:50.515 "uuid": "979c71f5-1364-4973-bc87-a1eb3de3cd65", 00:08:50.515 "assigned_rate_limits": { 00:08:50.515 "rw_ios_per_sec": 0, 00:08:50.515 "rw_mbytes_per_sec": 0, 00:08:50.515 "r_mbytes_per_sec": 0, 00:08:50.515 "w_mbytes_per_sec": 0 00:08:50.515 }, 00:08:50.515 "claimed": false, 00:08:50.515 "zoned": false, 00:08:50.515 "supported_io_types": { 00:08:50.515 "read": true, 00:08:50.515 "write": true, 00:08:50.515 "unmap": true, 00:08:50.515 "flush": true, 00:08:50.515 "reset": true, 00:08:50.515 "nvme_admin": false, 00:08:50.515 "nvme_io": false, 00:08:50.515 "nvme_io_md": false, 00:08:50.515 "write_zeroes": true, 00:08:50.515 "zcopy": true, 00:08:50.515 "get_zone_info": false, 00:08:50.515 "zone_management": false, 00:08:50.515 "zone_append": false, 00:08:50.515 "compare": false, 00:08:50.515 "compare_and_write": false, 00:08:50.515 "abort": true, 00:08:50.515 "seek_hole": false, 00:08:50.515 "seek_data": false, 00:08:50.515 "copy": true, 00:08:50.515 "nvme_iov_md": false 00:08:50.515 }, 00:08:50.515 "memory_domains": [ 00:08:50.515 { 00:08:50.515 "dma_device_id": "system", 00:08:50.515 "dma_device_type": 1 00:08:50.515 }, 00:08:50.515 { 
00:08:50.515 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.515 "dma_device_type": 2 00:08:50.515 } 00:08:50.515 ], 00:08:50.515 "driver_specific": {} 00:08:50.515 } 00:08:50.515 ] 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.515 [2024-12-12 05:46:57.978013] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:50.515 [2024-12-12 05:46:57.978110] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:50.515 [2024-12-12 05:46:57.978149] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:50.515 [2024-12-12 05:46:57.979873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.515 05:46:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.515 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.776 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:50.776 "name": "Existed_Raid", 00:08:50.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.776 "strip_size_kb": 64, 00:08:50.776 "state": "configuring", 00:08:50.776 "raid_level": "raid0", 00:08:50.776 "superblock": false, 00:08:50.776 "num_base_bdevs": 3, 00:08:50.776 "num_base_bdevs_discovered": 2, 00:08:50.776 "num_base_bdevs_operational": 3, 00:08:50.776 "base_bdevs_list": [ 00:08:50.776 { 00:08:50.776 "name": "BaseBdev1", 00:08:50.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:50.776 
"is_configured": false, 00:08:50.776 "data_offset": 0, 00:08:50.776 "data_size": 0 00:08:50.776 }, 00:08:50.776 { 00:08:50.776 "name": "BaseBdev2", 00:08:50.776 "uuid": "aba9101a-449a-4504-ac4c-ee2cbda5d31c", 00:08:50.776 "is_configured": true, 00:08:50.776 "data_offset": 0, 00:08:50.776 "data_size": 65536 00:08:50.776 }, 00:08:50.776 { 00:08:50.776 "name": "BaseBdev3", 00:08:50.776 "uuid": "979c71f5-1364-4973-bc87-a1eb3de3cd65", 00:08:50.776 "is_configured": true, 00:08:50.776 "data_offset": 0, 00:08:50.776 "data_size": 65536 00:08:50.776 } 00:08:50.776 ] 00:08:50.776 }' 00:08:50.776 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:50.776 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.036 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:08:51.036 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.036 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.036 [2024-12-12 05:46:58.397305] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:51.036 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.036 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:51.036 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.036 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.036 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.036 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.036 05:46:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.036 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.036 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.036 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.036 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.036 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.036 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.036 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.036 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.036 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.036 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.036 "name": "Existed_Raid", 00:08:51.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.036 "strip_size_kb": 64, 00:08:51.036 "state": "configuring", 00:08:51.036 "raid_level": "raid0", 00:08:51.036 "superblock": false, 00:08:51.036 "num_base_bdevs": 3, 00:08:51.036 "num_base_bdevs_discovered": 1, 00:08:51.036 "num_base_bdevs_operational": 3, 00:08:51.036 "base_bdevs_list": [ 00:08:51.036 { 00:08:51.036 "name": "BaseBdev1", 00:08:51.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.036 "is_configured": false, 00:08:51.036 "data_offset": 0, 00:08:51.036 "data_size": 0 00:08:51.036 }, 00:08:51.036 { 00:08:51.036 "name": null, 00:08:51.036 "uuid": "aba9101a-449a-4504-ac4c-ee2cbda5d31c", 00:08:51.036 "is_configured": false, 00:08:51.036 "data_offset": 0, 
00:08:51.036 "data_size": 65536 00:08:51.036 }, 00:08:51.036 { 00:08:51.036 "name": "BaseBdev3", 00:08:51.036 "uuid": "979c71f5-1364-4973-bc87-a1eb3de3cd65", 00:08:51.036 "is_configured": true, 00:08:51.036 "data_offset": 0, 00:08:51.036 "data_size": 65536 00:08:51.036 } 00:08:51.036 ] 00:08:51.036 }' 00:08:51.036 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.036 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.612 [2024-12-12 05:46:58.895371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:51.612 BaseBdev1 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.612 [ 00:08:51.612 { 00:08:51.612 "name": "BaseBdev1", 00:08:51.612 "aliases": [ 00:08:51.612 "e4a629d7-3e78-4ca3-bab4-ad8d8e463cdc" 00:08:51.612 ], 00:08:51.612 "product_name": "Malloc disk", 00:08:51.612 "block_size": 512, 00:08:51.612 "num_blocks": 65536, 00:08:51.612 "uuid": "e4a629d7-3e78-4ca3-bab4-ad8d8e463cdc", 00:08:51.612 "assigned_rate_limits": { 00:08:51.612 "rw_ios_per_sec": 0, 00:08:51.612 "rw_mbytes_per_sec": 0, 00:08:51.612 "r_mbytes_per_sec": 0, 00:08:51.612 "w_mbytes_per_sec": 0 00:08:51.612 }, 00:08:51.612 "claimed": true, 00:08:51.612 "claim_type": "exclusive_write", 00:08:51.612 "zoned": false, 00:08:51.612 "supported_io_types": { 00:08:51.612 "read": true, 00:08:51.612 "write": true, 00:08:51.612 "unmap": 
true, 00:08:51.612 "flush": true, 00:08:51.612 "reset": true, 00:08:51.612 "nvme_admin": false, 00:08:51.612 "nvme_io": false, 00:08:51.612 "nvme_io_md": false, 00:08:51.612 "write_zeroes": true, 00:08:51.612 "zcopy": true, 00:08:51.612 "get_zone_info": false, 00:08:51.612 "zone_management": false, 00:08:51.612 "zone_append": false, 00:08:51.612 "compare": false, 00:08:51.612 "compare_and_write": false, 00:08:51.612 "abort": true, 00:08:51.612 "seek_hole": false, 00:08:51.612 "seek_data": false, 00:08:51.612 "copy": true, 00:08:51.612 "nvme_iov_md": false 00:08:51.612 }, 00:08:51.612 "memory_domains": [ 00:08:51.612 { 00:08:51.612 "dma_device_id": "system", 00:08:51.612 "dma_device_type": 1 00:08:51.612 }, 00:08:51.612 { 00:08:51.612 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:51.612 "dma_device_type": 2 00:08:51.612 } 00:08:51.612 ], 00:08:51.612 "driver_specific": {} 00:08:51.612 } 00:08:51.612 ] 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.612 05:46:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.612 "name": "Existed_Raid", 00:08:51.612 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:51.612 "strip_size_kb": 64, 00:08:51.612 "state": "configuring", 00:08:51.612 "raid_level": "raid0", 00:08:51.612 "superblock": false, 00:08:51.612 "num_base_bdevs": 3, 00:08:51.612 "num_base_bdevs_discovered": 2, 00:08:51.612 "num_base_bdevs_operational": 3, 00:08:51.612 "base_bdevs_list": [ 00:08:51.612 { 00:08:51.612 "name": "BaseBdev1", 00:08:51.612 "uuid": "e4a629d7-3e78-4ca3-bab4-ad8d8e463cdc", 00:08:51.612 "is_configured": true, 00:08:51.612 "data_offset": 0, 00:08:51.612 "data_size": 65536 00:08:51.612 }, 00:08:51.612 { 00:08:51.612 "name": null, 00:08:51.612 "uuid": "aba9101a-449a-4504-ac4c-ee2cbda5d31c", 00:08:51.612 "is_configured": false, 00:08:51.612 "data_offset": 0, 00:08:51.612 "data_size": 65536 00:08:51.612 }, 00:08:51.612 { 00:08:51.612 "name": "BaseBdev3", 00:08:51.612 "uuid": "979c71f5-1364-4973-bc87-a1eb3de3cd65", 00:08:51.612 "is_configured": true, 00:08:51.612 "data_offset": 0, 
00:08:51.612 "data_size": 65536 00:08:51.612 } 00:08:51.612 ] 00:08:51.612 }' 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.612 05:46:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.880 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.880 05:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.880 05:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.880 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:51.880 05:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.880 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:08:51.880 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:08:51.880 05:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.880 05:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.880 [2024-12-12 05:46:59.394555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:08:51.880 05:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.880 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:51.880 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:51.880 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:51.880 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:51.880 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:51.880 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:51.880 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.880 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.880 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.880 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.140 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.140 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.140 05:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.140 05:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.140 05:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.140 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.140 "name": "Existed_Raid", 00:08:52.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.140 "strip_size_kb": 64, 00:08:52.140 "state": "configuring", 00:08:52.140 "raid_level": "raid0", 00:08:52.140 "superblock": false, 00:08:52.140 "num_base_bdevs": 3, 00:08:52.140 "num_base_bdevs_discovered": 1, 00:08:52.140 "num_base_bdevs_operational": 3, 00:08:52.140 "base_bdevs_list": [ 00:08:52.140 { 00:08:52.140 "name": "BaseBdev1", 00:08:52.140 "uuid": "e4a629d7-3e78-4ca3-bab4-ad8d8e463cdc", 00:08:52.140 "is_configured": true, 00:08:52.140 "data_offset": 0, 00:08:52.140 "data_size": 65536 00:08:52.140 }, 00:08:52.140 { 
00:08:52.140 "name": null, 00:08:52.140 "uuid": "aba9101a-449a-4504-ac4c-ee2cbda5d31c", 00:08:52.140 "is_configured": false, 00:08:52.140 "data_offset": 0, 00:08:52.140 "data_size": 65536 00:08:52.140 }, 00:08:52.140 { 00:08:52.140 "name": null, 00:08:52.140 "uuid": "979c71f5-1364-4973-bc87-a1eb3de3cd65", 00:08:52.140 "is_configured": false, 00:08:52.140 "data_offset": 0, 00:08:52.140 "data_size": 65536 00:08:52.140 } 00:08:52.140 ] 00:08:52.140 }' 00:08:52.140 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.140 05:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.399 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.399 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:52.399 05:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.399 05:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.400 [2024-12-12 05:46:59.857785] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.400 "name": "Existed_Raid", 00:08:52.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:52.400 "strip_size_kb": 64, 00:08:52.400 "state": "configuring", 00:08:52.400 "raid_level": "raid0", 00:08:52.400 
"superblock": false, 00:08:52.400 "num_base_bdevs": 3, 00:08:52.400 "num_base_bdevs_discovered": 2, 00:08:52.400 "num_base_bdevs_operational": 3, 00:08:52.400 "base_bdevs_list": [ 00:08:52.400 { 00:08:52.400 "name": "BaseBdev1", 00:08:52.400 "uuid": "e4a629d7-3e78-4ca3-bab4-ad8d8e463cdc", 00:08:52.400 "is_configured": true, 00:08:52.400 "data_offset": 0, 00:08:52.400 "data_size": 65536 00:08:52.400 }, 00:08:52.400 { 00:08:52.400 "name": null, 00:08:52.400 "uuid": "aba9101a-449a-4504-ac4c-ee2cbda5d31c", 00:08:52.400 "is_configured": false, 00:08:52.400 "data_offset": 0, 00:08:52.400 "data_size": 65536 00:08:52.400 }, 00:08:52.400 { 00:08:52.400 "name": "BaseBdev3", 00:08:52.400 "uuid": "979c71f5-1364-4973-bc87-a1eb3de3cd65", 00:08:52.400 "is_configured": true, 00:08:52.400 "data_offset": 0, 00:08:52.400 "data_size": 65536 00:08:52.400 } 00:08:52.400 ] 00:08:52.400 }' 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.400 05:46:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.968 [2024-12-12 05:47:00.384966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.968 05:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.226 05:47:00 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.227 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.227 "name": "Existed_Raid", 00:08:53.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.227 "strip_size_kb": 64, 00:08:53.227 "state": "configuring", 00:08:53.227 "raid_level": "raid0", 00:08:53.227 "superblock": false, 00:08:53.227 "num_base_bdevs": 3, 00:08:53.227 "num_base_bdevs_discovered": 1, 00:08:53.227 "num_base_bdevs_operational": 3, 00:08:53.227 "base_bdevs_list": [ 00:08:53.227 { 00:08:53.227 "name": null, 00:08:53.227 "uuid": "e4a629d7-3e78-4ca3-bab4-ad8d8e463cdc", 00:08:53.227 "is_configured": false, 00:08:53.227 "data_offset": 0, 00:08:53.227 "data_size": 65536 00:08:53.227 }, 00:08:53.227 { 00:08:53.227 "name": null, 00:08:53.227 "uuid": "aba9101a-449a-4504-ac4c-ee2cbda5d31c", 00:08:53.227 "is_configured": false, 00:08:53.227 "data_offset": 0, 00:08:53.227 "data_size": 65536 00:08:53.227 }, 00:08:53.227 { 00:08:53.227 "name": "BaseBdev3", 00:08:53.227 "uuid": "979c71f5-1364-4973-bc87-a1eb3de3cd65", 00:08:53.227 "is_configured": true, 00:08:53.227 "data_offset": 0, 00:08:53.227 "data_size": 65536 00:08:53.227 } 00:08:53.227 ] 00:08:53.227 }' 00:08:53.227 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.227 05:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.486 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.486 05:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.486 05:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.486 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:08:53.486 05:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:08:53.486 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:08:53.486 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:08:53.486 05:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.486 05:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.486 [2024-12-12 05:47:00.933313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:53.486 05:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.486 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:53.486 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:53.486 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:53.486 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:53.486 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:53.486 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:53.486 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.486 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.486 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.487 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.487 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:53.487 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.487 05:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.487 05:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.487 05:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.487 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:53.487 "name": "Existed_Raid", 00:08:53.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.487 "strip_size_kb": 64, 00:08:53.487 "state": "configuring", 00:08:53.487 "raid_level": "raid0", 00:08:53.487 "superblock": false, 00:08:53.487 "num_base_bdevs": 3, 00:08:53.487 "num_base_bdevs_discovered": 2, 00:08:53.487 "num_base_bdevs_operational": 3, 00:08:53.487 "base_bdevs_list": [ 00:08:53.487 { 00:08:53.487 "name": null, 00:08:53.487 "uuid": "e4a629d7-3e78-4ca3-bab4-ad8d8e463cdc", 00:08:53.487 "is_configured": false, 00:08:53.487 "data_offset": 0, 00:08:53.487 "data_size": 65536 00:08:53.487 }, 00:08:53.487 { 00:08:53.487 "name": "BaseBdev2", 00:08:53.487 "uuid": "aba9101a-449a-4504-ac4c-ee2cbda5d31c", 00:08:53.487 "is_configured": true, 00:08:53.487 "data_offset": 0, 00:08:53.487 "data_size": 65536 00:08:53.487 }, 00:08:53.487 { 00:08:53.487 "name": "BaseBdev3", 00:08:53.487 "uuid": "979c71f5-1364-4973-bc87-a1eb3de3cd65", 00:08:53.487 "is_configured": true, 00:08:53.487 "data_offset": 0, 00:08:53.487 "data_size": 65536 00:08:53.487 } 00:08:53.487 ] 00:08:53.487 }' 00:08:53.487 05:47:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.487 05:47:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.057 
05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e4a629d7-3e78-4ca3-bab4-ad8d8e463cdc 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.057 [2024-12-12 05:47:01.480213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:08:54.057 [2024-12-12 05:47:01.480263] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:54.057 [2024-12-12 05:47:01.480273] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:54.057 [2024-12-12 05:47:01.480538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:08:54.057 [2024-12-12 05:47:01.480727] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:54.057 [2024-12-12 05:47:01.480737] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:08:54.057 [2024-12-12 05:47:01.480989] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.057 NewBaseBdev 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.057 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:54.057 [ 00:08:54.057 { 00:08:54.057 "name": "NewBaseBdev", 00:08:54.057 "aliases": [ 00:08:54.057 "e4a629d7-3e78-4ca3-bab4-ad8d8e463cdc" 00:08:54.057 ], 00:08:54.057 "product_name": "Malloc disk", 00:08:54.057 "block_size": 512, 00:08:54.057 "num_blocks": 65536, 00:08:54.057 "uuid": "e4a629d7-3e78-4ca3-bab4-ad8d8e463cdc", 00:08:54.057 "assigned_rate_limits": { 00:08:54.057 "rw_ios_per_sec": 0, 00:08:54.057 "rw_mbytes_per_sec": 0, 00:08:54.057 "r_mbytes_per_sec": 0, 00:08:54.057 "w_mbytes_per_sec": 0 00:08:54.057 }, 00:08:54.057 "claimed": true, 00:08:54.057 "claim_type": "exclusive_write", 00:08:54.057 "zoned": false, 00:08:54.057 "supported_io_types": { 00:08:54.057 "read": true, 00:08:54.057 "write": true, 00:08:54.057 "unmap": true, 00:08:54.057 "flush": true, 00:08:54.057 "reset": true, 00:08:54.057 "nvme_admin": false, 00:08:54.057 "nvme_io": false, 00:08:54.058 "nvme_io_md": false, 00:08:54.058 "write_zeroes": true, 00:08:54.058 "zcopy": true, 00:08:54.058 "get_zone_info": false, 00:08:54.058 "zone_management": false, 00:08:54.058 "zone_append": false, 00:08:54.058 "compare": false, 00:08:54.058 "compare_and_write": false, 00:08:54.058 "abort": true, 00:08:54.058 "seek_hole": false, 00:08:54.058 "seek_data": false, 00:08:54.058 "copy": true, 00:08:54.058 "nvme_iov_md": false 00:08:54.058 }, 00:08:54.058 "memory_domains": [ 00:08:54.058 { 00:08:54.058 "dma_device_id": "system", 00:08:54.058 "dma_device_type": 1 00:08:54.058 }, 00:08:54.058 { 00:08:54.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.058 "dma_device_type": 2 00:08:54.058 } 00:08:54.058 ], 00:08:54.058 "driver_specific": {} 00:08:54.058 } 00:08:54.058 ] 00:08:54.058 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.058 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:54.058 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:08:54.058 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:54.058 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.058 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:54.058 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.058 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:54.058 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.058 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.058 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.058 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.058 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.058 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:54.058 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.058 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.058 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.058 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.058 "name": "Existed_Raid", 00:08:54.058 "uuid": "9d88f8bf-fdac-464e-882e-6de1c935aada", 00:08:54.058 "strip_size_kb": 64, 00:08:54.058 "state": "online", 00:08:54.058 "raid_level": "raid0", 00:08:54.058 "superblock": false, 00:08:54.058 "num_base_bdevs": 3, 00:08:54.058 
"num_base_bdevs_discovered": 3, 00:08:54.058 "num_base_bdevs_operational": 3, 00:08:54.058 "base_bdevs_list": [ 00:08:54.058 { 00:08:54.058 "name": "NewBaseBdev", 00:08:54.058 "uuid": "e4a629d7-3e78-4ca3-bab4-ad8d8e463cdc", 00:08:54.058 "is_configured": true, 00:08:54.058 "data_offset": 0, 00:08:54.058 "data_size": 65536 00:08:54.058 }, 00:08:54.058 { 00:08:54.058 "name": "BaseBdev2", 00:08:54.058 "uuid": "aba9101a-449a-4504-ac4c-ee2cbda5d31c", 00:08:54.058 "is_configured": true, 00:08:54.058 "data_offset": 0, 00:08:54.058 "data_size": 65536 00:08:54.058 }, 00:08:54.058 { 00:08:54.058 "name": "BaseBdev3", 00:08:54.058 "uuid": "979c71f5-1364-4973-bc87-a1eb3de3cd65", 00:08:54.058 "is_configured": true, 00:08:54.058 "data_offset": 0, 00:08:54.058 "data_size": 65536 00:08:54.058 } 00:08:54.058 ] 00:08:54.058 }' 00:08:54.058 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.058 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.627 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:08:54.628 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:54.628 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:54.628 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:54.628 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:54.628 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:54.628 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:54.628 05:47:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:54.628 05:47:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.628 05:47:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.628 [2024-12-12 05:47:01.983683] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.628 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.628 05:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:54.628 "name": "Existed_Raid", 00:08:54.628 "aliases": [ 00:08:54.628 "9d88f8bf-fdac-464e-882e-6de1c935aada" 00:08:54.628 ], 00:08:54.628 "product_name": "Raid Volume", 00:08:54.628 "block_size": 512, 00:08:54.628 "num_blocks": 196608, 00:08:54.628 "uuid": "9d88f8bf-fdac-464e-882e-6de1c935aada", 00:08:54.628 "assigned_rate_limits": { 00:08:54.628 "rw_ios_per_sec": 0, 00:08:54.628 "rw_mbytes_per_sec": 0, 00:08:54.628 "r_mbytes_per_sec": 0, 00:08:54.628 "w_mbytes_per_sec": 0 00:08:54.628 }, 00:08:54.628 "claimed": false, 00:08:54.628 "zoned": false, 00:08:54.628 "supported_io_types": { 00:08:54.628 "read": true, 00:08:54.628 "write": true, 00:08:54.628 "unmap": true, 00:08:54.628 "flush": true, 00:08:54.628 "reset": true, 00:08:54.628 "nvme_admin": false, 00:08:54.628 "nvme_io": false, 00:08:54.628 "nvme_io_md": false, 00:08:54.628 "write_zeroes": true, 00:08:54.628 "zcopy": false, 00:08:54.628 "get_zone_info": false, 00:08:54.628 "zone_management": false, 00:08:54.628 "zone_append": false, 00:08:54.628 "compare": false, 00:08:54.628 "compare_and_write": false, 00:08:54.628 "abort": false, 00:08:54.628 "seek_hole": false, 00:08:54.628 "seek_data": false, 00:08:54.628 "copy": false, 00:08:54.628 "nvme_iov_md": false 00:08:54.628 }, 00:08:54.628 "memory_domains": [ 00:08:54.628 { 00:08:54.628 "dma_device_id": "system", 00:08:54.628 "dma_device_type": 1 00:08:54.628 }, 00:08:54.628 { 00:08:54.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.628 "dma_device_type": 2 00:08:54.628 }, 
00:08:54.628 { 00:08:54.628 "dma_device_id": "system", 00:08:54.628 "dma_device_type": 1 00:08:54.628 }, 00:08:54.628 { 00:08:54.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.628 "dma_device_type": 2 00:08:54.628 }, 00:08:54.628 { 00:08:54.628 "dma_device_id": "system", 00:08:54.628 "dma_device_type": 1 00:08:54.628 }, 00:08:54.628 { 00:08:54.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.628 "dma_device_type": 2 00:08:54.628 } 00:08:54.628 ], 00:08:54.628 "driver_specific": { 00:08:54.628 "raid": { 00:08:54.628 "uuid": "9d88f8bf-fdac-464e-882e-6de1c935aada", 00:08:54.628 "strip_size_kb": 64, 00:08:54.628 "state": "online", 00:08:54.628 "raid_level": "raid0", 00:08:54.628 "superblock": false, 00:08:54.628 "num_base_bdevs": 3, 00:08:54.628 "num_base_bdevs_discovered": 3, 00:08:54.628 "num_base_bdevs_operational": 3, 00:08:54.628 "base_bdevs_list": [ 00:08:54.628 { 00:08:54.628 "name": "NewBaseBdev", 00:08:54.628 "uuid": "e4a629d7-3e78-4ca3-bab4-ad8d8e463cdc", 00:08:54.628 "is_configured": true, 00:08:54.628 "data_offset": 0, 00:08:54.628 "data_size": 65536 00:08:54.628 }, 00:08:54.628 { 00:08:54.628 "name": "BaseBdev2", 00:08:54.628 "uuid": "aba9101a-449a-4504-ac4c-ee2cbda5d31c", 00:08:54.628 "is_configured": true, 00:08:54.628 "data_offset": 0, 00:08:54.628 "data_size": 65536 00:08:54.628 }, 00:08:54.628 { 00:08:54.628 "name": "BaseBdev3", 00:08:54.628 "uuid": "979c71f5-1364-4973-bc87-a1eb3de3cd65", 00:08:54.628 "is_configured": true, 00:08:54.628 "data_offset": 0, 00:08:54.628 "data_size": 65536 00:08:54.628 } 00:08:54.628 ] 00:08:54.628 } 00:08:54.628 } 00:08:54.628 }' 00:08:54.628 05:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:54.628 05:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:08:54.628 BaseBdev2 00:08:54.628 BaseBdev3' 00:08:54.628 05:47:02 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.628 05:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:54.628 05:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.628 05:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:08:54.628 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.628 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.628 05:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.628 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.888 [2024-12-12 05:47:02.242930] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:54.888 [2024-12-12 05:47:02.242957] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.888 [2024-12-12 05:47:02.243034] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.888 [2024-12-12 05:47:02.243086] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.888 [2024-12-12 05:47:02.243096] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 64786 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 64786 ']' 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 64786 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64786 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64786' 00:08:54.888 killing process with pid 64786 00:08:54.888 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 64786 00:08:54.888 [2024-12-12 05:47:02.288449] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.889 05:47:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 64786 00:08:55.148 [2024-12-12 05:47:02.567432] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.528 05:47:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:56.528 00:08:56.528 real 0m10.394s 00:08:56.528 user 0m16.657s 00:08:56.528 sys 0m1.739s 00:08:56.528 05:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:08:56.528 05:47:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.528 ************************************ 00:08:56.528 END TEST raid_state_function_test 00:08:56.528 ************************************ 00:08:56.528 05:47:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:08:56.528 05:47:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:56.528 05:47:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.528 05:47:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:56.528 ************************************ 00:08:56.528 START TEST raid_state_function_test_sb 00:08:56.528 ************************************ 00:08:56.528 05:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:08:56.528 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:56.528 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:08:56.528 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:56.528 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:56.528 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:56.528 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.528 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:56.528 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=65407 00:08:56.529 05:47:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65407' 00:08:56.529 Process raid pid: 65407 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 65407 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 65407 ']' 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.529 05:47:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:56.529 [2024-12-12 05:47:03.782055] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:08:56.529 [2024-12-12 05:47:03.782261] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.529 [2024-12-12 05:47:03.954872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.788 [2024-12-12 05:47:04.069362] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.788 [2024-12-12 05:47:04.254232] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.788 [2024-12-12 05:47:04.254355] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.358 [2024-12-12 05:47:04.602109] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:57.358 [2024-12-12 05:47:04.602224] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:57.358 [2024-12-12 05:47:04.602239] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.358 [2024-12-12 05:47:04.602270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.358 [2024-12-12 05:47:04.602277] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:08:57.358 [2024-12-12 05:47:04.602285] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.358 05:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.358 "name": "Existed_Raid", 00:08:57.358 "uuid": "5a76c357-8b06-4bbe-a49a-b966a861aa8d", 00:08:57.358 "strip_size_kb": 64, 00:08:57.358 "state": "configuring", 00:08:57.358 "raid_level": "raid0", 00:08:57.358 "superblock": true, 00:08:57.358 "num_base_bdevs": 3, 00:08:57.358 "num_base_bdevs_discovered": 0, 00:08:57.358 "num_base_bdevs_operational": 3, 00:08:57.359 "base_bdevs_list": [ 00:08:57.359 { 00:08:57.359 "name": "BaseBdev1", 00:08:57.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.359 "is_configured": false, 00:08:57.359 "data_offset": 0, 00:08:57.359 "data_size": 0 00:08:57.359 }, 00:08:57.359 { 00:08:57.359 "name": "BaseBdev2", 00:08:57.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.359 "is_configured": false, 00:08:57.359 "data_offset": 0, 00:08:57.359 "data_size": 0 00:08:57.359 }, 00:08:57.359 { 00:08:57.359 "name": "BaseBdev3", 00:08:57.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.359 "is_configured": false, 00:08:57.359 "data_offset": 0, 00:08:57.359 "data_size": 0 00:08:57.359 } 00:08:57.359 ] 00:08:57.359 }' 00:08:57.359 05:47:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.359 05:47:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.618 [2024-12-12 05:47:05.053255] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:57.618 [2024-12-12 05:47:05.053331] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.618 [2024-12-12 05:47:05.061256] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:57.618 [2024-12-12 05:47:05.061332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:57.618 [2024-12-12 05:47:05.061374] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.618 [2024-12-12 05:47:05.061396] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.618 [2024-12-12 05:47:05.061414] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:57.618 [2024-12-12 05:47:05.061434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.618 [2024-12-12 05:47:05.109471] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.618 BaseBdev1 
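Throughout this run, `verify_raid_bdev_state` (bdev_raid.sh@103-115) fetches the raid bdev via `rpc_cmd bdev_raid_get_bdevs all`, filters it with `jq -r '.[] | select(.name == "Existed_Raid")'`, and compares the fields against the expected values. A minimal Python sketch of the same check, run against an abridged copy of the JSON dumped above (the helper name and field checks mirror the shell logic; this is an illustrative reimplementation, not SPDK code):

```python
import json

# Abridged raid_bdev_info as produced by `bdev_raid_get_bdevs all` piped
# through jq '.[] | select(.name == "Existed_Raid")' (copied from the log).
raid_bdev_info = json.loads("""
{
  "name": "Existed_Raid",
  "strip_size_kb": 64,
  "state": "configuring",
  "raid_level": "raid0",
  "superblock": true,
  "num_base_bdevs": 3,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 3
}
""")

def verify_raid_bdev_state(info, expected_state, raid_level, strip_size,
                           num_operational):
    # Field-by-field comparison, mirroring the shell helper's checks.
    assert info["state"] == expected_state
    assert info["raid_level"] == raid_level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == num_operational

# With no base bdevs discovered yet, the array must still be "configuring".
verify_raid_bdev_state(raid_bdev_info, "configuring", "raid0", 64, 3)
```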
00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.618 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.618 [ 00:08:57.618 { 00:08:57.618 "name": "BaseBdev1", 00:08:57.618 "aliases": [ 00:08:57.618 "20364ef1-e380-47a2-bdc2-a74b9892515e" 00:08:57.618 ], 00:08:57.618 "product_name": "Malloc disk", 00:08:57.618 "block_size": 512, 00:08:57.618 "num_blocks": 65536, 00:08:57.618 "uuid": "20364ef1-e380-47a2-bdc2-a74b9892515e", 00:08:57.618 "assigned_rate_limits": { 00:08:57.618 
"rw_ios_per_sec": 0, 00:08:57.618 "rw_mbytes_per_sec": 0, 00:08:57.618 "r_mbytes_per_sec": 0, 00:08:57.618 "w_mbytes_per_sec": 0 00:08:57.618 }, 00:08:57.618 "claimed": true, 00:08:57.618 "claim_type": "exclusive_write", 00:08:57.618 "zoned": false, 00:08:57.618 "supported_io_types": { 00:08:57.618 "read": true, 00:08:57.618 "write": true, 00:08:57.618 "unmap": true, 00:08:57.618 "flush": true, 00:08:57.618 "reset": true, 00:08:57.618 "nvme_admin": false, 00:08:57.618 "nvme_io": false, 00:08:57.877 "nvme_io_md": false, 00:08:57.877 "write_zeroes": true, 00:08:57.877 "zcopy": true, 00:08:57.877 "get_zone_info": false, 00:08:57.877 "zone_management": false, 00:08:57.877 "zone_append": false, 00:08:57.877 "compare": false, 00:08:57.877 "compare_and_write": false, 00:08:57.877 "abort": true, 00:08:57.877 "seek_hole": false, 00:08:57.877 "seek_data": false, 00:08:57.877 "copy": true, 00:08:57.877 "nvme_iov_md": false 00:08:57.877 }, 00:08:57.877 "memory_domains": [ 00:08:57.877 { 00:08:57.877 "dma_device_id": "system", 00:08:57.877 "dma_device_type": 1 00:08:57.877 }, 00:08:57.877 { 00:08:57.877 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.877 "dma_device_type": 2 00:08:57.877 } 00:08:57.877 ], 00:08:57.877 "driver_specific": {} 00:08:57.877 } 00:08:57.877 ] 00:08:57.877 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.877 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:57.877 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:57.877 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:57.877 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:57.877 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:57.877 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.877 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:57.877 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.877 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.877 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.877 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.877 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.877 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.877 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.877 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:57.877 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.877 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.877 "name": "Existed_Raid", 00:08:57.877 "uuid": "b11fec9c-96f5-4562-a5dd-173275a44d08", 00:08:57.877 "strip_size_kb": 64, 00:08:57.877 "state": "configuring", 00:08:57.877 "raid_level": "raid0", 00:08:57.877 "superblock": true, 00:08:57.877 "num_base_bdevs": 3, 00:08:57.877 "num_base_bdevs_discovered": 1, 00:08:57.877 "num_base_bdevs_operational": 3, 00:08:57.877 "base_bdevs_list": [ 00:08:57.877 { 00:08:57.877 "name": "BaseBdev1", 00:08:57.877 "uuid": "20364ef1-e380-47a2-bdc2-a74b9892515e", 00:08:57.877 "is_configured": true, 00:08:57.877 "data_offset": 2048, 00:08:57.877 "data_size": 63488 
00:08:57.877 }, 00:08:57.877 { 00:08:57.877 "name": "BaseBdev2", 00:08:57.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.877 "is_configured": false, 00:08:57.877 "data_offset": 0, 00:08:57.877 "data_size": 0 00:08:57.877 }, 00:08:57.877 { 00:08:57.877 "name": "BaseBdev3", 00:08:57.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.877 "is_configured": false, 00:08:57.877 "data_offset": 0, 00:08:57.877 "data_size": 0 00:08:57.877 } 00:08:57.877 ] 00:08:57.877 }' 00:08:57.877 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.877 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.137 [2024-12-12 05:47:05.548754] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:58.137 [2024-12-12 05:47:05.548801] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.137 [2024-12-12 05:47:05.560791] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.137 [2024-12-12 
05:47:05.562576] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.137 [2024-12-12 05:47:05.562619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.137 [2024-12-12 05:47:05.562630] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:58.137 [2024-12-12 05:47:05.562639] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.137 "name": "Existed_Raid", 00:08:58.137 "uuid": "4c80bfd7-7567-4d01-a0b1-e79207f0f225", 00:08:58.137 "strip_size_kb": 64, 00:08:58.137 "state": "configuring", 00:08:58.137 "raid_level": "raid0", 00:08:58.137 "superblock": true, 00:08:58.137 "num_base_bdevs": 3, 00:08:58.137 "num_base_bdevs_discovered": 1, 00:08:58.137 "num_base_bdevs_operational": 3, 00:08:58.137 "base_bdevs_list": [ 00:08:58.137 { 00:08:58.137 "name": "BaseBdev1", 00:08:58.137 "uuid": "20364ef1-e380-47a2-bdc2-a74b9892515e", 00:08:58.137 "is_configured": true, 00:08:58.137 "data_offset": 2048, 00:08:58.137 "data_size": 63488 00:08:58.137 }, 00:08:58.137 { 00:08:58.137 "name": "BaseBdev2", 00:08:58.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.137 "is_configured": false, 00:08:58.137 "data_offset": 0, 00:08:58.137 "data_size": 0 00:08:58.137 }, 00:08:58.137 { 00:08:58.137 "name": "BaseBdev3", 00:08:58.137 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.137 "is_configured": false, 00:08:58.137 "data_offset": 0, 00:08:58.137 "data_size": 0 00:08:58.137 } 00:08:58.137 ] 00:08:58.137 }' 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.137 05:47:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.706 [2024-12-12 05:47:06.070013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:58.706 BaseBdev2 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.706 [ 00:08:58.706 { 00:08:58.706 "name": "BaseBdev2", 00:08:58.706 "aliases": [ 00:08:58.706 "e857a424-fcb4-478b-bc72-0f77e2233f83" 00:08:58.706 ], 00:08:58.706 "product_name": "Malloc disk", 00:08:58.706 "block_size": 512, 00:08:58.706 "num_blocks": 65536, 00:08:58.706 "uuid": "e857a424-fcb4-478b-bc72-0f77e2233f83", 00:08:58.706 "assigned_rate_limits": { 00:08:58.706 "rw_ios_per_sec": 0, 00:08:58.706 "rw_mbytes_per_sec": 0, 00:08:58.706 "r_mbytes_per_sec": 0, 00:08:58.706 "w_mbytes_per_sec": 0 00:08:58.706 }, 00:08:58.706 "claimed": true, 00:08:58.706 "claim_type": "exclusive_write", 00:08:58.706 "zoned": false, 00:08:58.706 "supported_io_types": { 00:08:58.706 "read": true, 00:08:58.706 "write": true, 00:08:58.706 "unmap": true, 00:08:58.706 "flush": true, 00:08:58.706 "reset": true, 00:08:58.706 "nvme_admin": false, 00:08:58.706 "nvme_io": false, 00:08:58.706 "nvme_io_md": false, 00:08:58.706 "write_zeroes": true, 00:08:58.706 "zcopy": true, 00:08:58.706 "get_zone_info": false, 00:08:58.706 "zone_management": false, 00:08:58.706 "zone_append": false, 00:08:58.706 "compare": false, 00:08:58.706 "compare_and_write": false, 00:08:58.706 "abort": true, 00:08:58.706 "seek_hole": false, 00:08:58.706 "seek_data": false, 00:08:58.706 "copy": true, 00:08:58.706 "nvme_iov_md": false 00:08:58.706 }, 00:08:58.706 "memory_domains": [ 00:08:58.706 { 00:08:58.706 "dma_device_id": "system", 00:08:58.706 "dma_device_type": 1 00:08:58.706 }, 00:08:58.706 { 00:08:58.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.706 "dma_device_type": 2 00:08:58.706 } 00:08:58.706 ], 00:08:58.706 "driver_specific": {} 00:08:58.706 } 00:08:58.706 ] 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.706 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.706 "name": "Existed_Raid", 00:08:58.706 "uuid": "4c80bfd7-7567-4d01-a0b1-e79207f0f225", 00:08:58.706 "strip_size_kb": 64, 00:08:58.706 "state": "configuring", 00:08:58.706 "raid_level": "raid0", 00:08:58.706 "superblock": true, 00:08:58.706 "num_base_bdevs": 3, 00:08:58.706 "num_base_bdevs_discovered": 2, 00:08:58.706 "num_base_bdevs_operational": 3, 00:08:58.706 "base_bdevs_list": [ 00:08:58.706 { 00:08:58.706 "name": "BaseBdev1", 00:08:58.706 "uuid": "20364ef1-e380-47a2-bdc2-a74b9892515e", 00:08:58.706 "is_configured": true, 00:08:58.706 "data_offset": 2048, 00:08:58.706 "data_size": 63488 00:08:58.706 }, 00:08:58.706 { 00:08:58.706 "name": "BaseBdev2", 00:08:58.706 "uuid": "e857a424-fcb4-478b-bc72-0f77e2233f83", 00:08:58.706 "is_configured": true, 00:08:58.706 "data_offset": 2048, 00:08:58.706 "data_size": 63488 00:08:58.706 }, 00:08:58.706 { 00:08:58.706 "name": "BaseBdev3", 00:08:58.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.706 "is_configured": false, 00:08:58.706 "data_offset": 0, 00:08:58.707 "data_size": 0 00:08:58.707 } 00:08:58.707 ] 00:08:58.707 }' 00:08:58.707 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.707 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.276 BaseBdev3 00:08:59.276 [2024-12-12 05:47:06.581877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:08:59.276 [2024-12-12 
05:47:06.582147] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:59.276 [2024-12-12 05:47:06.582167] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:08:59.276 [2024-12-12 05:47:06.582434] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:59.276 [2024-12-12 05:47:06.582633] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:59.276 [2024-12-12 05:47:06.582644] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:59.276 [2024-12-12 05:47:06.582787] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.276 [ 00:08:59.276 { 00:08:59.276 "name": "BaseBdev3", 00:08:59.276 "aliases": [ 00:08:59.276 "15c21ad1-fa56-4aa8-954f-8913185469af" 00:08:59.276 ], 00:08:59.276 "product_name": "Malloc disk", 00:08:59.276 "block_size": 512, 00:08:59.276 "num_blocks": 65536, 00:08:59.276 "uuid": "15c21ad1-fa56-4aa8-954f-8913185469af", 00:08:59.276 "assigned_rate_limits": { 00:08:59.276 "rw_ios_per_sec": 0, 00:08:59.276 "rw_mbytes_per_sec": 0, 00:08:59.276 "r_mbytes_per_sec": 0, 00:08:59.276 "w_mbytes_per_sec": 0 00:08:59.276 }, 00:08:59.276 "claimed": true, 00:08:59.276 "claim_type": "exclusive_write", 00:08:59.276 "zoned": false, 00:08:59.276 "supported_io_types": { 00:08:59.276 "read": true, 00:08:59.276 "write": true, 00:08:59.276 "unmap": true, 00:08:59.276 "flush": true, 00:08:59.276 "reset": true, 00:08:59.276 "nvme_admin": false, 00:08:59.276 "nvme_io": false, 00:08:59.276 "nvme_io_md": false, 00:08:59.276 "write_zeroes": true, 00:08:59.276 "zcopy": true, 00:08:59.276 "get_zone_info": false, 00:08:59.276 "zone_management": false, 00:08:59.276 "zone_append": false, 00:08:59.276 "compare": false, 00:08:59.276 "compare_and_write": false, 00:08:59.276 "abort": true, 00:08:59.276 "seek_hole": false, 00:08:59.276 "seek_data": false, 00:08:59.276 "copy": true, 00:08:59.276 "nvme_iov_md": false 00:08:59.276 }, 00:08:59.276 "memory_domains": [ 00:08:59.276 { 00:08:59.276 "dma_device_id": "system", 00:08:59.276 "dma_device_type": 1 00:08:59.276 }, 00:08:59.276 { 00:08:59.276 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.276 "dma_device_type": 2 00:08:59.276 } 00:08:59.276 ], 00:08:59.276 "driver_specific": {} 
00:08:59.276 } 00:08:59.276 ] 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.276 "name": "Existed_Raid", 00:08:59.276 "uuid": "4c80bfd7-7567-4d01-a0b1-e79207f0f225", 00:08:59.276 "strip_size_kb": 64, 00:08:59.276 "state": "online", 00:08:59.276 "raid_level": "raid0", 00:08:59.276 "superblock": true, 00:08:59.276 "num_base_bdevs": 3, 00:08:59.276 "num_base_bdevs_discovered": 3, 00:08:59.276 "num_base_bdevs_operational": 3, 00:08:59.276 "base_bdevs_list": [ 00:08:59.276 { 00:08:59.276 "name": "BaseBdev1", 00:08:59.276 "uuid": "20364ef1-e380-47a2-bdc2-a74b9892515e", 00:08:59.276 "is_configured": true, 00:08:59.276 "data_offset": 2048, 00:08:59.276 "data_size": 63488 00:08:59.276 }, 00:08:59.276 { 00:08:59.276 "name": "BaseBdev2", 00:08:59.276 "uuid": "e857a424-fcb4-478b-bc72-0f77e2233f83", 00:08:59.276 "is_configured": true, 00:08:59.276 "data_offset": 2048, 00:08:59.276 "data_size": 63488 00:08:59.276 }, 00:08:59.276 { 00:08:59.276 "name": "BaseBdev3", 00:08:59.276 "uuid": "15c21ad1-fa56-4aa8-954f-8913185469af", 00:08:59.276 "is_configured": true, 00:08:59.276 "data_offset": 2048, 00:08:59.276 "data_size": 63488 00:08:59.276 } 00:08:59.276 ] 00:08:59.276 }' 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.276 05:47:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.536 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:59.536 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:59.536 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:08:59.536 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:59.536 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:59.536 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:59.536 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:59.536 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:59.536 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.536 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.536 [2024-12-12 05:47:07.049397] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.796 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.796 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:59.796 "name": "Existed_Raid", 00:08:59.796 "aliases": [ 00:08:59.796 "4c80bfd7-7567-4d01-a0b1-e79207f0f225" 00:08:59.796 ], 00:08:59.796 "product_name": "Raid Volume", 00:08:59.796 "block_size": 512, 00:08:59.796 "num_blocks": 190464, 00:08:59.796 "uuid": "4c80bfd7-7567-4d01-a0b1-e79207f0f225", 00:08:59.796 "assigned_rate_limits": { 00:08:59.796 "rw_ios_per_sec": 0, 00:08:59.796 "rw_mbytes_per_sec": 0, 00:08:59.796 "r_mbytes_per_sec": 0, 00:08:59.796 "w_mbytes_per_sec": 0 00:08:59.796 }, 00:08:59.796 "claimed": false, 00:08:59.796 "zoned": false, 00:08:59.796 "supported_io_types": { 00:08:59.796 "read": true, 00:08:59.796 "write": true, 00:08:59.796 "unmap": true, 00:08:59.796 "flush": true, 00:08:59.796 "reset": true, 00:08:59.796 "nvme_admin": false, 00:08:59.796 "nvme_io": false, 00:08:59.796 "nvme_io_md": false, 00:08:59.796 
"write_zeroes": true, 00:08:59.796 "zcopy": false, 00:08:59.796 "get_zone_info": false, 00:08:59.796 "zone_management": false, 00:08:59.796 "zone_append": false, 00:08:59.796 "compare": false, 00:08:59.796 "compare_and_write": false, 00:08:59.796 "abort": false, 00:08:59.796 "seek_hole": false, 00:08:59.796 "seek_data": false, 00:08:59.796 "copy": false, 00:08:59.796 "nvme_iov_md": false 00:08:59.796 }, 00:08:59.796 "memory_domains": [ 00:08:59.796 { 00:08:59.796 "dma_device_id": "system", 00:08:59.796 "dma_device_type": 1 00:08:59.796 }, 00:08:59.796 { 00:08:59.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.796 "dma_device_type": 2 00:08:59.796 }, 00:08:59.796 { 00:08:59.796 "dma_device_id": "system", 00:08:59.796 "dma_device_type": 1 00:08:59.796 }, 00:08:59.796 { 00:08:59.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.796 "dma_device_type": 2 00:08:59.796 }, 00:08:59.796 { 00:08:59.796 "dma_device_id": "system", 00:08:59.796 "dma_device_type": 1 00:08:59.796 }, 00:08:59.796 { 00:08:59.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:59.796 "dma_device_type": 2 00:08:59.796 } 00:08:59.796 ], 00:08:59.796 "driver_specific": { 00:08:59.796 "raid": { 00:08:59.796 "uuid": "4c80bfd7-7567-4d01-a0b1-e79207f0f225", 00:08:59.796 "strip_size_kb": 64, 00:08:59.796 "state": "online", 00:08:59.796 "raid_level": "raid0", 00:08:59.796 "superblock": true, 00:08:59.796 "num_base_bdevs": 3, 00:08:59.796 "num_base_bdevs_discovered": 3, 00:08:59.796 "num_base_bdevs_operational": 3, 00:08:59.796 "base_bdevs_list": [ 00:08:59.796 { 00:08:59.796 "name": "BaseBdev1", 00:08:59.796 "uuid": "20364ef1-e380-47a2-bdc2-a74b9892515e", 00:08:59.796 "is_configured": true, 00:08:59.796 "data_offset": 2048, 00:08:59.796 "data_size": 63488 00:08:59.796 }, 00:08:59.796 { 00:08:59.796 "name": "BaseBdev2", 00:08:59.796 "uuid": "e857a424-fcb4-478b-bc72-0f77e2233f83", 00:08:59.796 "is_configured": true, 00:08:59.796 "data_offset": 2048, 00:08:59.796 "data_size": 63488 00:08:59.796 }, 
00:08:59.796 { 00:08:59.796 "name": "BaseBdev3", 00:08:59.796 "uuid": "15c21ad1-fa56-4aa8-954f-8913185469af", 00:08:59.796 "is_configured": true, 00:08:59.796 "data_offset": 2048, 00:08:59.796 "data_size": 63488 00:08:59.796 } 00:08:59.796 ] 00:08:59.796 } 00:08:59.796 } 00:08:59.796 }' 00:08:59.796 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:59.796 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:59.796 BaseBdev2 00:08:59.796 BaseBdev3' 00:08:59.796 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.796 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:59.796 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.796 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:59.796 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.796 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.796 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.796 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.796 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.796 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.796 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.796 
05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.796 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:59.796 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.796 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.797 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.797 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.797 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.797 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:59.797 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:08:59.797 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:59.797 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.797 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.797 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.797 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:59.797 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:59.797 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:59.797 05:47:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.797 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:59.797 [2024-12-12 05:47:07.304718] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:59.797 [2024-12-12 05:47:07.304782] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:59.797 [2024-12-12 05:47:07.304872] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.058 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.058 "name": "Existed_Raid", 00:09:00.058 "uuid": "4c80bfd7-7567-4d01-a0b1-e79207f0f225", 00:09:00.058 "strip_size_kb": 64, 00:09:00.058 "state": "offline", 00:09:00.058 "raid_level": "raid0", 00:09:00.058 "superblock": true, 00:09:00.058 "num_base_bdevs": 3, 00:09:00.058 "num_base_bdevs_discovered": 2, 00:09:00.058 "num_base_bdevs_operational": 2, 00:09:00.058 "base_bdevs_list": [ 00:09:00.058 { 00:09:00.058 "name": null, 00:09:00.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.058 "is_configured": false, 00:09:00.058 "data_offset": 0, 00:09:00.058 "data_size": 63488 00:09:00.059 }, 00:09:00.059 { 00:09:00.059 "name": "BaseBdev2", 00:09:00.059 "uuid": "e857a424-fcb4-478b-bc72-0f77e2233f83", 00:09:00.059 "is_configured": true, 00:09:00.059 "data_offset": 2048, 00:09:00.059 "data_size": 63488 00:09:00.059 }, 00:09:00.059 { 00:09:00.059 "name": "BaseBdev3", 00:09:00.059 "uuid": "15c21ad1-fa56-4aa8-954f-8913185469af", 
00:09:00.059 "is_configured": true, 00:09:00.059 "data_offset": 2048, 00:09:00.059 "data_size": 63488 00:09:00.059 } 00:09:00.059 ] 00:09:00.059 }' 00:09:00.059 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.059 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.628 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:00.628 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.628 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:00.628 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.628 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.628 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.628 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.628 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:00.628 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:00.628 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:00.628 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.628 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.628 [2024-12-12 05:47:07.892359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:00.628 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.628 05:47:07 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:00.628 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.628 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.628 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.628 05:47:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:00.628 05:47:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.628 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.628 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:00.628 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:00.628 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:00.628 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.628 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.628 [2024-12-12 05:47:08.042756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:00.628 [2024-12-12 05:47:08.042806] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:00.628 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.628 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:00.629 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:00.629 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:00.629 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:00.629 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.629 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.888 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.888 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:00.888 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:00.888 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:00.888 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:00.888 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:00.888 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:00.888 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.888 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.888 BaseBdev2 00:09:00.888 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.888 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:00.888 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:00.888 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.888 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:00.888 05:47:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.888 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.888 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.888 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.888 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.888 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.888 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:00.888 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.889 [ 00:09:00.889 { 00:09:00.889 "name": "BaseBdev2", 00:09:00.889 "aliases": [ 00:09:00.889 "39d154bb-b247-4cf2-9472-bf12192c2c61" 00:09:00.889 ], 00:09:00.889 "product_name": "Malloc disk", 00:09:00.889 "block_size": 512, 00:09:00.889 "num_blocks": 65536, 00:09:00.889 "uuid": "39d154bb-b247-4cf2-9472-bf12192c2c61", 00:09:00.889 "assigned_rate_limits": { 00:09:00.889 "rw_ios_per_sec": 0, 00:09:00.889 "rw_mbytes_per_sec": 0, 00:09:00.889 "r_mbytes_per_sec": 0, 00:09:00.889 "w_mbytes_per_sec": 0 00:09:00.889 }, 00:09:00.889 "claimed": false, 00:09:00.889 "zoned": false, 00:09:00.889 "supported_io_types": { 00:09:00.889 "read": true, 00:09:00.889 "write": true, 00:09:00.889 "unmap": true, 00:09:00.889 "flush": true, 00:09:00.889 "reset": true, 00:09:00.889 "nvme_admin": false, 00:09:00.889 "nvme_io": false, 00:09:00.889 "nvme_io_md": false, 00:09:00.889 "write_zeroes": true, 00:09:00.889 "zcopy": true, 00:09:00.889 "get_zone_info": false, 00:09:00.889 
"zone_management": false, 00:09:00.889 "zone_append": false, 00:09:00.889 "compare": false, 00:09:00.889 "compare_and_write": false, 00:09:00.889 "abort": true, 00:09:00.889 "seek_hole": false, 00:09:00.889 "seek_data": false, 00:09:00.889 "copy": true, 00:09:00.889 "nvme_iov_md": false 00:09:00.889 }, 00:09:00.889 "memory_domains": [ 00:09:00.889 { 00:09:00.889 "dma_device_id": "system", 00:09:00.889 "dma_device_type": 1 00:09:00.889 }, 00:09:00.889 { 00:09:00.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.889 "dma_device_type": 2 00:09:00.889 } 00:09:00.889 ], 00:09:00.889 "driver_specific": {} 00:09:00.889 } 00:09:00.889 ] 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.889 BaseBdev3 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.889 [ 00:09:00.889 { 00:09:00.889 "name": "BaseBdev3", 00:09:00.889 "aliases": [ 00:09:00.889 "102425f1-2536-4b23-8d18-1abdd46e6ee3" 00:09:00.889 ], 00:09:00.889 "product_name": "Malloc disk", 00:09:00.889 "block_size": 512, 00:09:00.889 "num_blocks": 65536, 00:09:00.889 "uuid": "102425f1-2536-4b23-8d18-1abdd46e6ee3", 00:09:00.889 "assigned_rate_limits": { 00:09:00.889 "rw_ios_per_sec": 0, 00:09:00.889 "rw_mbytes_per_sec": 0, 00:09:00.889 "r_mbytes_per_sec": 0, 00:09:00.889 "w_mbytes_per_sec": 0 00:09:00.889 }, 00:09:00.889 "claimed": false, 00:09:00.889 "zoned": false, 00:09:00.889 "supported_io_types": { 00:09:00.889 "read": true, 00:09:00.889 "write": true, 00:09:00.889 "unmap": true, 00:09:00.889 "flush": true, 00:09:00.889 "reset": true, 00:09:00.889 "nvme_admin": false, 00:09:00.889 "nvme_io": false, 00:09:00.889 "nvme_io_md": false, 00:09:00.889 "write_zeroes": true, 00:09:00.889 
"zcopy": true, 00:09:00.889 "get_zone_info": false, 00:09:00.889 "zone_management": false, 00:09:00.889 "zone_append": false, 00:09:00.889 "compare": false, 00:09:00.889 "compare_and_write": false, 00:09:00.889 "abort": true, 00:09:00.889 "seek_hole": false, 00:09:00.889 "seek_data": false, 00:09:00.889 "copy": true, 00:09:00.889 "nvme_iov_md": false 00:09:00.889 }, 00:09:00.889 "memory_domains": [ 00:09:00.889 { 00:09:00.889 "dma_device_id": "system", 00:09:00.889 "dma_device_type": 1 00:09:00.889 }, 00:09:00.889 { 00:09:00.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.889 "dma_device_type": 2 00:09:00.889 } 00:09:00.889 ], 00:09:00.889 "driver_specific": {} 00:09:00.889 } 00:09:00.889 ] 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.889 [2024-12-12 05:47:08.341020] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:00.889 [2024-12-12 05:47:08.341101] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:00.889 [2024-12-12 05:47:08.341157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:00.889 [2024-12-12 05:47:08.343021] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.889 05:47:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:00.889 "name": "Existed_Raid", 00:09:00.889 "uuid": "fd602d72-4f0b-44a3-92a7-71dcf4d60e63", 00:09:00.889 "strip_size_kb": 64, 00:09:00.889 "state": "configuring", 00:09:00.889 "raid_level": "raid0", 00:09:00.889 "superblock": true, 00:09:00.889 "num_base_bdevs": 3, 00:09:00.889 "num_base_bdevs_discovered": 2, 00:09:00.889 "num_base_bdevs_operational": 3, 00:09:00.889 "base_bdevs_list": [ 00:09:00.889 { 00:09:00.889 "name": "BaseBdev1", 00:09:00.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.889 "is_configured": false, 00:09:00.889 "data_offset": 0, 00:09:00.889 "data_size": 0 00:09:00.889 }, 00:09:00.889 { 00:09:00.889 "name": "BaseBdev2", 00:09:00.889 "uuid": "39d154bb-b247-4cf2-9472-bf12192c2c61", 00:09:00.889 "is_configured": true, 00:09:00.889 "data_offset": 2048, 00:09:00.889 "data_size": 63488 00:09:00.889 }, 00:09:00.889 { 00:09:00.889 "name": "BaseBdev3", 00:09:00.889 "uuid": "102425f1-2536-4b23-8d18-1abdd46e6ee3", 00:09:00.889 "is_configured": true, 00:09:00.889 "data_offset": 2048, 00:09:00.889 "data_size": 63488 00:09:00.889 } 00:09:00.889 ] 00:09:00.889 }' 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:00.889 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.458 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:01.458 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.458 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.458 [2024-12-12 05:47:08.716359] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:01.458 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.458 05:47:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:01.458 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.458 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.458 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.458 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.458 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.458 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.458 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.458 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.458 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.458 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.458 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.458 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.458 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.458 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.458 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.458 "name": "Existed_Raid", 00:09:01.458 "uuid": "fd602d72-4f0b-44a3-92a7-71dcf4d60e63", 00:09:01.458 "strip_size_kb": 64, 
00:09:01.458 "state": "configuring", 00:09:01.458 "raid_level": "raid0", 00:09:01.458 "superblock": true, 00:09:01.458 "num_base_bdevs": 3, 00:09:01.458 "num_base_bdevs_discovered": 1, 00:09:01.458 "num_base_bdevs_operational": 3, 00:09:01.458 "base_bdevs_list": [ 00:09:01.458 { 00:09:01.458 "name": "BaseBdev1", 00:09:01.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.458 "is_configured": false, 00:09:01.458 "data_offset": 0, 00:09:01.458 "data_size": 0 00:09:01.459 }, 00:09:01.459 { 00:09:01.459 "name": null, 00:09:01.459 "uuid": "39d154bb-b247-4cf2-9472-bf12192c2c61", 00:09:01.459 "is_configured": false, 00:09:01.459 "data_offset": 0, 00:09:01.459 "data_size": 63488 00:09:01.459 }, 00:09:01.459 { 00:09:01.459 "name": "BaseBdev3", 00:09:01.459 "uuid": "102425f1-2536-4b23-8d18-1abdd46e6ee3", 00:09:01.459 "is_configured": true, 00:09:01.459 "data_offset": 2048, 00:09:01.459 "data_size": 63488 00:09:01.459 } 00:09:01.459 ] 00:09:01.459 }' 00:09:01.459 05:47:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.459 05:47:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.718 [2024-12-12 05:47:09.215167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:01.718 BaseBdev1 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.718 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.978 
[ 00:09:01.978 { 00:09:01.978 "name": "BaseBdev1", 00:09:01.978 "aliases": [ 00:09:01.978 "296b355e-df5c-4f9e-b361-bc93a8c2d378" 00:09:01.978 ], 00:09:01.978 "product_name": "Malloc disk", 00:09:01.978 "block_size": 512, 00:09:01.978 "num_blocks": 65536, 00:09:01.978 "uuid": "296b355e-df5c-4f9e-b361-bc93a8c2d378", 00:09:01.978 "assigned_rate_limits": { 00:09:01.978 "rw_ios_per_sec": 0, 00:09:01.978 "rw_mbytes_per_sec": 0, 00:09:01.978 "r_mbytes_per_sec": 0, 00:09:01.978 "w_mbytes_per_sec": 0 00:09:01.978 }, 00:09:01.978 "claimed": true, 00:09:01.978 "claim_type": "exclusive_write", 00:09:01.978 "zoned": false, 00:09:01.978 "supported_io_types": { 00:09:01.978 "read": true, 00:09:01.978 "write": true, 00:09:01.978 "unmap": true, 00:09:01.978 "flush": true, 00:09:01.978 "reset": true, 00:09:01.978 "nvme_admin": false, 00:09:01.978 "nvme_io": false, 00:09:01.978 "nvme_io_md": false, 00:09:01.978 "write_zeroes": true, 00:09:01.978 "zcopy": true, 00:09:01.978 "get_zone_info": false, 00:09:01.978 "zone_management": false, 00:09:01.978 "zone_append": false, 00:09:01.978 "compare": false, 00:09:01.978 "compare_and_write": false, 00:09:01.978 "abort": true, 00:09:01.978 "seek_hole": false, 00:09:01.978 "seek_data": false, 00:09:01.978 "copy": true, 00:09:01.978 "nvme_iov_md": false 00:09:01.978 }, 00:09:01.978 "memory_domains": [ 00:09:01.978 { 00:09:01.978 "dma_device_id": "system", 00:09:01.978 "dma_device_type": 1 00:09:01.978 }, 00:09:01.978 { 00:09:01.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.978 "dma_device_type": 2 00:09:01.978 } 00:09:01.978 ], 00:09:01.978 "driver_specific": {} 00:09:01.978 } 00:09:01.978 ] 00:09:01.978 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.978 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:01.978 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:09:01.978 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.978 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.978 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:01.978 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.978 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:01.978 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.978 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.978 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.978 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.978 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.978 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.978 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.978 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:01.978 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.978 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.978 "name": "Existed_Raid", 00:09:01.978 "uuid": "fd602d72-4f0b-44a3-92a7-71dcf4d60e63", 00:09:01.978 "strip_size_kb": 64, 00:09:01.978 "state": "configuring", 00:09:01.978 "raid_level": "raid0", 00:09:01.978 "superblock": true, 
00:09:01.978 "num_base_bdevs": 3, 00:09:01.978 "num_base_bdevs_discovered": 2, 00:09:01.978 "num_base_bdevs_operational": 3, 00:09:01.978 "base_bdevs_list": [ 00:09:01.978 { 00:09:01.978 "name": "BaseBdev1", 00:09:01.978 "uuid": "296b355e-df5c-4f9e-b361-bc93a8c2d378", 00:09:01.978 "is_configured": true, 00:09:01.978 "data_offset": 2048, 00:09:01.978 "data_size": 63488 00:09:01.978 }, 00:09:01.978 { 00:09:01.978 "name": null, 00:09:01.978 "uuid": "39d154bb-b247-4cf2-9472-bf12192c2c61", 00:09:01.978 "is_configured": false, 00:09:01.978 "data_offset": 0, 00:09:01.978 "data_size": 63488 00:09:01.978 }, 00:09:01.978 { 00:09:01.978 "name": "BaseBdev3", 00:09:01.978 "uuid": "102425f1-2536-4b23-8d18-1abdd46e6ee3", 00:09:01.978 "is_configured": true, 00:09:01.978 "data_offset": 2048, 00:09:01.978 "data_size": 63488 00:09:01.978 } 00:09:01.978 ] 00:09:01.978 }' 00:09:01.978 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.978 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.238 [2024-12-12 05:47:09.718400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.238 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.238 "name": "Existed_Raid", 00:09:02.238 "uuid": "fd602d72-4f0b-44a3-92a7-71dcf4d60e63", 00:09:02.238 "strip_size_kb": 64, 00:09:02.238 "state": "configuring", 00:09:02.238 "raid_level": "raid0", 00:09:02.238 "superblock": true, 00:09:02.238 "num_base_bdevs": 3, 00:09:02.238 "num_base_bdevs_discovered": 1, 00:09:02.238 "num_base_bdevs_operational": 3, 00:09:02.238 "base_bdevs_list": [ 00:09:02.238 { 00:09:02.238 "name": "BaseBdev1", 00:09:02.238 "uuid": "296b355e-df5c-4f9e-b361-bc93a8c2d378", 00:09:02.238 "is_configured": true, 00:09:02.238 "data_offset": 2048, 00:09:02.238 "data_size": 63488 00:09:02.238 }, 00:09:02.238 { 00:09:02.239 "name": null, 00:09:02.239 "uuid": "39d154bb-b247-4cf2-9472-bf12192c2c61", 00:09:02.239 "is_configured": false, 00:09:02.239 "data_offset": 0, 00:09:02.239 "data_size": 63488 00:09:02.239 }, 00:09:02.239 { 00:09:02.239 "name": null, 00:09:02.239 "uuid": "102425f1-2536-4b23-8d18-1abdd46e6ee3", 00:09:02.239 "is_configured": false, 00:09:02.239 "data_offset": 0, 00:09:02.239 "data_size": 63488 00:09:02.239 } 00:09:02.239 ] 00:09:02.239 }' 00:09:02.239 05:47:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.239 05:47:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.804 [2024-12-12 05:47:10.233569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.804 "name": "Existed_Raid", 00:09:02.804 "uuid": "fd602d72-4f0b-44a3-92a7-71dcf4d60e63", 00:09:02.804 "strip_size_kb": 64, 00:09:02.804 "state": "configuring", 00:09:02.804 "raid_level": "raid0", 00:09:02.804 "superblock": true, 00:09:02.804 "num_base_bdevs": 3, 00:09:02.804 "num_base_bdevs_discovered": 2, 00:09:02.804 "num_base_bdevs_operational": 3, 00:09:02.804 "base_bdevs_list": [ 00:09:02.804 { 00:09:02.804 "name": "BaseBdev1", 00:09:02.804 "uuid": "296b355e-df5c-4f9e-b361-bc93a8c2d378", 00:09:02.804 "is_configured": true, 00:09:02.804 "data_offset": 2048, 00:09:02.804 "data_size": 63488 00:09:02.804 }, 00:09:02.804 { 00:09:02.804 "name": null, 00:09:02.804 "uuid": "39d154bb-b247-4cf2-9472-bf12192c2c61", 00:09:02.804 "is_configured": false, 00:09:02.804 "data_offset": 0, 00:09:02.804 "data_size": 63488 00:09:02.804 }, 00:09:02.804 { 00:09:02.804 "name": "BaseBdev3", 00:09:02.804 "uuid": "102425f1-2536-4b23-8d18-1abdd46e6ee3", 00:09:02.804 "is_configured": true, 00:09:02.804 "data_offset": 2048, 00:09:02.804 "data_size": 63488 00:09:02.804 } 00:09:02.804 ] 00:09:02.804 }' 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.804 05:47:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.371 [2024-12-12 05:47:10.720723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.371 "name": "Existed_Raid", 00:09:03.371 "uuid": "fd602d72-4f0b-44a3-92a7-71dcf4d60e63", 00:09:03.371 "strip_size_kb": 64, 00:09:03.371 "state": "configuring", 00:09:03.371 "raid_level": "raid0", 00:09:03.371 "superblock": true, 00:09:03.371 "num_base_bdevs": 3, 00:09:03.371 "num_base_bdevs_discovered": 1, 00:09:03.371 "num_base_bdevs_operational": 3, 00:09:03.371 "base_bdevs_list": [ 00:09:03.371 { 00:09:03.371 "name": null, 00:09:03.371 "uuid": "296b355e-df5c-4f9e-b361-bc93a8c2d378", 00:09:03.371 "is_configured": false, 00:09:03.371 "data_offset": 0, 00:09:03.371 "data_size": 63488 00:09:03.371 }, 00:09:03.371 { 00:09:03.371 "name": null, 00:09:03.371 "uuid": "39d154bb-b247-4cf2-9472-bf12192c2c61", 00:09:03.371 "is_configured": false, 00:09:03.371 "data_offset": 0, 00:09:03.371 
"data_size": 63488 00:09:03.371 }, 00:09:03.371 { 00:09:03.371 "name": "BaseBdev3", 00:09:03.371 "uuid": "102425f1-2536-4b23-8d18-1abdd46e6ee3", 00:09:03.371 "is_configured": true, 00:09:03.371 "data_offset": 2048, 00:09:03.371 "data_size": 63488 00:09:03.371 } 00:09:03.371 ] 00:09:03.371 }' 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.371 05:47:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.941 [2024-12-12 05:47:11.294428] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:03.941 05:47:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.941 "name": "Existed_Raid", 00:09:03.941 "uuid": "fd602d72-4f0b-44a3-92a7-71dcf4d60e63", 00:09:03.941 "strip_size_kb": 64, 00:09:03.941 "state": "configuring", 00:09:03.941 "raid_level": "raid0", 00:09:03.941 "superblock": true, 00:09:03.941 "num_base_bdevs": 3, 00:09:03.941 
"num_base_bdevs_discovered": 2, 00:09:03.941 "num_base_bdevs_operational": 3, 00:09:03.941 "base_bdevs_list": [ 00:09:03.941 { 00:09:03.941 "name": null, 00:09:03.941 "uuid": "296b355e-df5c-4f9e-b361-bc93a8c2d378", 00:09:03.941 "is_configured": false, 00:09:03.941 "data_offset": 0, 00:09:03.941 "data_size": 63488 00:09:03.941 }, 00:09:03.941 { 00:09:03.941 "name": "BaseBdev2", 00:09:03.941 "uuid": "39d154bb-b247-4cf2-9472-bf12192c2c61", 00:09:03.941 "is_configured": true, 00:09:03.941 "data_offset": 2048, 00:09:03.941 "data_size": 63488 00:09:03.941 }, 00:09:03.941 { 00:09:03.941 "name": "BaseBdev3", 00:09:03.941 "uuid": "102425f1-2536-4b23-8d18-1abdd46e6ee3", 00:09:03.941 "is_configured": true, 00:09:03.941 "data_offset": 2048, 00:09:03.941 "data_size": 63488 00:09:03.941 } 00:09:03.941 ] 00:09:03.941 }' 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.941 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.200 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.200 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.200 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.200 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:04.460 05:47:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 296b355e-df5c-4f9e-b361-bc93a8c2d378 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.460 NewBaseBdev 00:09:04.460 [2024-12-12 05:47:11.844880] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:04.460 [2024-12-12 05:47:11.845071] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:04.460 [2024-12-12 05:47:11.845086] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:04.460 [2024-12-12 05:47:11.845321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:04.460 [2024-12-12 05:47:11.845467] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:04.460 [2024-12-12 05:47:11.845476] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:04.460 [2024-12-12 05:47:11.845635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:04.460 
05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.460 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.460 [ 00:09:04.460 { 00:09:04.460 "name": "NewBaseBdev", 00:09:04.460 "aliases": [ 00:09:04.460 "296b355e-df5c-4f9e-b361-bc93a8c2d378" 00:09:04.460 ], 00:09:04.460 "product_name": "Malloc disk", 00:09:04.460 "block_size": 512, 00:09:04.460 "num_blocks": 65536, 00:09:04.460 "uuid": "296b355e-df5c-4f9e-b361-bc93a8c2d378", 00:09:04.460 "assigned_rate_limits": { 00:09:04.460 "rw_ios_per_sec": 0, 00:09:04.460 "rw_mbytes_per_sec": 0, 00:09:04.460 "r_mbytes_per_sec": 0, 00:09:04.460 "w_mbytes_per_sec": 0 00:09:04.461 }, 00:09:04.461 "claimed": true, 00:09:04.461 "claim_type": "exclusive_write", 00:09:04.461 "zoned": false, 00:09:04.461 "supported_io_types": { 00:09:04.461 "read": true, 00:09:04.461 "write": true, 00:09:04.461 
"unmap": true, 00:09:04.461 "flush": true, 00:09:04.461 "reset": true, 00:09:04.461 "nvme_admin": false, 00:09:04.461 "nvme_io": false, 00:09:04.461 "nvme_io_md": false, 00:09:04.461 "write_zeroes": true, 00:09:04.461 "zcopy": true, 00:09:04.461 "get_zone_info": false, 00:09:04.461 "zone_management": false, 00:09:04.461 "zone_append": false, 00:09:04.461 "compare": false, 00:09:04.461 "compare_and_write": false, 00:09:04.461 "abort": true, 00:09:04.461 "seek_hole": false, 00:09:04.461 "seek_data": false, 00:09:04.461 "copy": true, 00:09:04.461 "nvme_iov_md": false 00:09:04.461 }, 00:09:04.461 "memory_domains": [ 00:09:04.461 { 00:09:04.461 "dma_device_id": "system", 00:09:04.461 "dma_device_type": 1 00:09:04.461 }, 00:09:04.461 { 00:09:04.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.461 "dma_device_type": 2 00:09:04.461 } 00:09:04.461 ], 00:09:04.461 "driver_specific": {} 00:09:04.461 } 00:09:04.461 ] 00:09:04.461 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.461 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:04.461 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:04.461 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.461 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.461 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:04.461 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.461 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:04.461 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:04.461 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.461 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.461 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.461 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.461 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.461 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.461 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:04.461 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.461 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.461 "name": "Existed_Raid", 00:09:04.461 "uuid": "fd602d72-4f0b-44a3-92a7-71dcf4d60e63", 00:09:04.461 "strip_size_kb": 64, 00:09:04.461 "state": "online", 00:09:04.461 "raid_level": "raid0", 00:09:04.461 "superblock": true, 00:09:04.461 "num_base_bdevs": 3, 00:09:04.461 "num_base_bdevs_discovered": 3, 00:09:04.461 "num_base_bdevs_operational": 3, 00:09:04.461 "base_bdevs_list": [ 00:09:04.461 { 00:09:04.461 "name": "NewBaseBdev", 00:09:04.461 "uuid": "296b355e-df5c-4f9e-b361-bc93a8c2d378", 00:09:04.461 "is_configured": true, 00:09:04.461 "data_offset": 2048, 00:09:04.461 "data_size": 63488 00:09:04.461 }, 00:09:04.461 { 00:09:04.461 "name": "BaseBdev2", 00:09:04.461 "uuid": "39d154bb-b247-4cf2-9472-bf12192c2c61", 00:09:04.461 "is_configured": true, 00:09:04.461 "data_offset": 2048, 00:09:04.461 "data_size": 63488 00:09:04.461 }, 00:09:04.461 { 00:09:04.461 "name": "BaseBdev3", 00:09:04.461 "uuid": "102425f1-2536-4b23-8d18-1abdd46e6ee3", 00:09:04.461 
"is_configured": true, 00:09:04.461 "data_offset": 2048, 00:09:04.461 "data_size": 63488 00:09:04.461 } 00:09:04.461 ] 00:09:04.461 }' 00:09:04.461 05:47:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.461 05:47:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.031 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:05.031 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:05.031 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:05.031 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:05.031 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:05.031 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:05.031 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:05.031 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:05.031 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.031 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.031 [2024-12-12 05:47:12.340364] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:05.031 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.031 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:05.031 "name": "Existed_Raid", 00:09:05.031 "aliases": [ 00:09:05.031 "fd602d72-4f0b-44a3-92a7-71dcf4d60e63" 00:09:05.031 ], 00:09:05.031 "product_name": "Raid 
Volume", 00:09:05.031 "block_size": 512, 00:09:05.031 "num_blocks": 190464, 00:09:05.031 "uuid": "fd602d72-4f0b-44a3-92a7-71dcf4d60e63", 00:09:05.031 "assigned_rate_limits": { 00:09:05.031 "rw_ios_per_sec": 0, 00:09:05.031 "rw_mbytes_per_sec": 0, 00:09:05.031 "r_mbytes_per_sec": 0, 00:09:05.031 "w_mbytes_per_sec": 0 00:09:05.031 }, 00:09:05.031 "claimed": false, 00:09:05.031 "zoned": false, 00:09:05.031 "supported_io_types": { 00:09:05.031 "read": true, 00:09:05.031 "write": true, 00:09:05.031 "unmap": true, 00:09:05.031 "flush": true, 00:09:05.031 "reset": true, 00:09:05.031 "nvme_admin": false, 00:09:05.031 "nvme_io": false, 00:09:05.031 "nvme_io_md": false, 00:09:05.031 "write_zeroes": true, 00:09:05.031 "zcopy": false, 00:09:05.031 "get_zone_info": false, 00:09:05.031 "zone_management": false, 00:09:05.031 "zone_append": false, 00:09:05.031 "compare": false, 00:09:05.031 "compare_and_write": false, 00:09:05.031 "abort": false, 00:09:05.031 "seek_hole": false, 00:09:05.031 "seek_data": false, 00:09:05.031 "copy": false, 00:09:05.031 "nvme_iov_md": false 00:09:05.031 }, 00:09:05.031 "memory_domains": [ 00:09:05.031 { 00:09:05.031 "dma_device_id": "system", 00:09:05.031 "dma_device_type": 1 00:09:05.031 }, 00:09:05.031 { 00:09:05.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.031 "dma_device_type": 2 00:09:05.031 }, 00:09:05.031 { 00:09:05.031 "dma_device_id": "system", 00:09:05.031 "dma_device_type": 1 00:09:05.031 }, 00:09:05.031 { 00:09:05.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.031 "dma_device_type": 2 00:09:05.031 }, 00:09:05.031 { 00:09:05.031 "dma_device_id": "system", 00:09:05.031 "dma_device_type": 1 00:09:05.031 }, 00:09:05.031 { 00:09:05.031 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.031 "dma_device_type": 2 00:09:05.031 } 00:09:05.031 ], 00:09:05.031 "driver_specific": { 00:09:05.031 "raid": { 00:09:05.031 "uuid": "fd602d72-4f0b-44a3-92a7-71dcf4d60e63", 00:09:05.031 "strip_size_kb": 64, 00:09:05.031 "state": "online", 
00:09:05.031 "raid_level": "raid0", 00:09:05.031 "superblock": true, 00:09:05.031 "num_base_bdevs": 3, 00:09:05.031 "num_base_bdevs_discovered": 3, 00:09:05.031 "num_base_bdevs_operational": 3, 00:09:05.031 "base_bdevs_list": [ 00:09:05.031 { 00:09:05.031 "name": "NewBaseBdev", 00:09:05.031 "uuid": "296b355e-df5c-4f9e-b361-bc93a8c2d378", 00:09:05.031 "is_configured": true, 00:09:05.031 "data_offset": 2048, 00:09:05.031 "data_size": 63488 00:09:05.031 }, 00:09:05.031 { 00:09:05.031 "name": "BaseBdev2", 00:09:05.031 "uuid": "39d154bb-b247-4cf2-9472-bf12192c2c61", 00:09:05.031 "is_configured": true, 00:09:05.031 "data_offset": 2048, 00:09:05.031 "data_size": 63488 00:09:05.031 }, 00:09:05.031 { 00:09:05.031 "name": "BaseBdev3", 00:09:05.031 "uuid": "102425f1-2536-4b23-8d18-1abdd46e6ee3", 00:09:05.032 "is_configured": true, 00:09:05.032 "data_offset": 2048, 00:09:05.032 "data_size": 63488 00:09:05.032 } 00:09:05.032 ] 00:09:05.032 } 00:09:05.032 } 00:09:05.032 }' 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:05.032 BaseBdev2 00:09:05.032 BaseBdev3' 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:09:05.032 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:05.292 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.292 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:05.292 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:05.292 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:05.292 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.292 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:05.292 [2024-12-12 05:47:12.607635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:05.292 [2024-12-12 05:47:12.607662] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.292 [2024-12-12 05:47:12.607739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.292 [2024-12-12 05:47:12.607792] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.292 [2024-12-12 05:47:12.607804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:05.292 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.292 05:47:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 65407 00:09:05.292 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 65407 ']' 00:09:05.292 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 65407 00:09:05.292 05:47:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:05.292 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.292 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65407 00:09:05.292 killing process with pid 65407 00:09:05.292 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.292 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.292 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65407' 00:09:05.292 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 65407 00:09:05.292 [2024-12-12 05:47:12.642067] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:05.292 05:47:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 65407 00:09:05.551 [2024-12-12 05:47:12.921613] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:06.490 ************************************ 00:09:06.490 END TEST raid_state_function_test_sb 00:09:06.490 ************************************ 00:09:06.490 05:47:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:06.490 00:09:06.490 real 0m10.294s 00:09:06.490 user 0m16.433s 00:09:06.490 sys 0m1.739s 00:09:06.490 05:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.490 05:47:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.750 05:47:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:06.750 05:47:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:06.750 05:47:14 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.750 05:47:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:06.750 ************************************ 00:09:06.750 START TEST raid_superblock_test 00:09:06.750 ************************************ 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:06.750 05:47:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66027 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66027 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66027 ']' 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.750 05:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.750 [2024-12-12 05:47:14.138035] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:09:06.750 [2024-12-12 05:47:14.138225] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66027 ] 00:09:07.010 [2024-12-12 05:47:14.311950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.010 [2024-12-12 05:47:14.419284] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.269 [2024-12-12 05:47:14.610097] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.270 [2024-12-12 05:47:14.610239] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.529 05:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.529 05:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:07.529 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:07.529 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:07.529 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:07.529 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:07.529 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:07.529 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:07.529 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:07.529 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:07.529 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:07.529 
05:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.529 05:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.529 malloc1 00:09:07.529 05:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.530 05:47:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:07.530 05:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.530 05:47:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.530 [2024-12-12 05:47:15.006526] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:07.530 [2024-12-12 05:47:15.006600] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.530 [2024-12-12 05:47:15.006621] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:07.530 [2024-12-12 05:47:15.006631] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.530 [2024-12-12 05:47:15.008649] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.530 [2024-12-12 05:47:15.008684] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:07.530 pt1 00:09:07.530 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.530 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:07.530 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:07.530 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:07.530 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:07.530 05:47:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:07.530 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:07.530 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:07.530 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:07.530 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:07.530 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.530 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.791 malloc2 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.791 [2024-12-12 05:47:15.060112] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:07.791 [2024-12-12 05:47:15.060228] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.791 [2024-12-12 05:47:15.060268] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:07.791 [2024-12-12 05:47:15.060322] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.791 [2024-12-12 05:47:15.062392] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.791 [2024-12-12 05:47:15.062462] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:07.791 
pt2 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.791 malloc3 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.791 [2024-12-12 05:47:15.148378] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:07.791 [2024-12-12 05:47:15.148485] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.791 [2024-12-12 05:47:15.148558] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:07.791 [2024-12-12 05:47:15.148599] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.791 [2024-12-12 05:47:15.150847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.791 [2024-12-12 05:47:15.150949] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:07.791 pt3 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.791 [2024-12-12 05:47:15.160399] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:07.791 [2024-12-12 05:47:15.162359] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:07.791 [2024-12-12 05:47:15.162467] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:07.791 [2024-12-12 05:47:15.162697] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:07.791 [2024-12-12 05:47:15.162748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:07.791 [2024-12-12 05:47:15.163045] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:07.791 [2024-12-12 05:47:15.163257] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:07.791 [2024-12-12 05:47:15.163298] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:07.791 [2024-12-12 05:47:15.163528] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.791 05:47:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.791 "name": "raid_bdev1", 00:09:07.791 "uuid": "120d4aff-8927-4aaf-a368-41c5af26f74e", 00:09:07.791 "strip_size_kb": 64, 00:09:07.791 "state": "online", 00:09:07.791 "raid_level": "raid0", 00:09:07.791 "superblock": true, 00:09:07.791 "num_base_bdevs": 3, 00:09:07.791 "num_base_bdevs_discovered": 3, 00:09:07.791 "num_base_bdevs_operational": 3, 00:09:07.791 "base_bdevs_list": [ 00:09:07.791 { 00:09:07.791 "name": "pt1", 00:09:07.791 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:07.791 "is_configured": true, 00:09:07.791 "data_offset": 2048, 00:09:07.791 "data_size": 63488 00:09:07.791 }, 00:09:07.791 { 00:09:07.791 "name": "pt2", 00:09:07.791 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:07.791 "is_configured": true, 00:09:07.791 "data_offset": 2048, 00:09:07.791 "data_size": 63488 00:09:07.791 }, 00:09:07.791 { 00:09:07.791 "name": "pt3", 00:09:07.791 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:07.791 "is_configured": true, 00:09:07.791 "data_offset": 2048, 00:09:07.791 "data_size": 63488 00:09:07.791 } 00:09:07.791 ] 00:09:07.791 }' 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.791 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.359 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:08.359 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:08.359 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:08.359 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:08.359 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:08.359 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:08.359 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:08.359 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:08.359 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.359 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.359 [2024-12-12 05:47:15.611936] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.359 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.359 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:08.359 "name": "raid_bdev1", 00:09:08.359 "aliases": [ 00:09:08.359 "120d4aff-8927-4aaf-a368-41c5af26f74e" 00:09:08.359 ], 00:09:08.359 "product_name": "Raid Volume", 00:09:08.359 "block_size": 512, 00:09:08.359 "num_blocks": 190464, 00:09:08.359 "uuid": "120d4aff-8927-4aaf-a368-41c5af26f74e", 00:09:08.359 "assigned_rate_limits": { 00:09:08.359 "rw_ios_per_sec": 0, 00:09:08.359 "rw_mbytes_per_sec": 0, 00:09:08.359 "r_mbytes_per_sec": 0, 00:09:08.359 "w_mbytes_per_sec": 0 00:09:08.359 }, 00:09:08.359 "claimed": false, 00:09:08.359 "zoned": false, 00:09:08.359 "supported_io_types": { 00:09:08.359 "read": true, 00:09:08.359 "write": true, 00:09:08.359 "unmap": true, 00:09:08.359 "flush": true, 00:09:08.360 "reset": true, 00:09:08.360 "nvme_admin": false, 00:09:08.360 "nvme_io": false, 00:09:08.360 "nvme_io_md": false, 00:09:08.360 "write_zeroes": true, 00:09:08.360 "zcopy": false, 00:09:08.360 "get_zone_info": false, 00:09:08.360 "zone_management": false, 00:09:08.360 "zone_append": false, 00:09:08.360 "compare": 
false, 00:09:08.360 "compare_and_write": false, 00:09:08.360 "abort": false, 00:09:08.360 "seek_hole": false, 00:09:08.360 "seek_data": false, 00:09:08.360 "copy": false, 00:09:08.360 "nvme_iov_md": false 00:09:08.360 }, 00:09:08.360 "memory_domains": [ 00:09:08.360 { 00:09:08.360 "dma_device_id": "system", 00:09:08.360 "dma_device_type": 1 00:09:08.360 }, 00:09:08.360 { 00:09:08.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.360 "dma_device_type": 2 00:09:08.360 }, 00:09:08.360 { 00:09:08.360 "dma_device_id": "system", 00:09:08.360 "dma_device_type": 1 00:09:08.360 }, 00:09:08.360 { 00:09:08.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.360 "dma_device_type": 2 00:09:08.360 }, 00:09:08.360 { 00:09:08.360 "dma_device_id": "system", 00:09:08.360 "dma_device_type": 1 00:09:08.360 }, 00:09:08.360 { 00:09:08.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.360 "dma_device_type": 2 00:09:08.360 } 00:09:08.360 ], 00:09:08.360 "driver_specific": { 00:09:08.360 "raid": { 00:09:08.360 "uuid": "120d4aff-8927-4aaf-a368-41c5af26f74e", 00:09:08.360 "strip_size_kb": 64, 00:09:08.360 "state": "online", 00:09:08.360 "raid_level": "raid0", 00:09:08.360 "superblock": true, 00:09:08.360 "num_base_bdevs": 3, 00:09:08.360 "num_base_bdevs_discovered": 3, 00:09:08.360 "num_base_bdevs_operational": 3, 00:09:08.360 "base_bdevs_list": [ 00:09:08.360 { 00:09:08.360 "name": "pt1", 00:09:08.360 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:08.360 "is_configured": true, 00:09:08.360 "data_offset": 2048, 00:09:08.360 "data_size": 63488 00:09:08.360 }, 00:09:08.360 { 00:09:08.360 "name": "pt2", 00:09:08.360 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:08.360 "is_configured": true, 00:09:08.360 "data_offset": 2048, 00:09:08.360 "data_size": 63488 00:09:08.360 }, 00:09:08.360 { 00:09:08.360 "name": "pt3", 00:09:08.360 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:08.360 "is_configured": true, 00:09:08.360 "data_offset": 2048, 00:09:08.360 "data_size": 
63488 00:09:08.360 } 00:09:08.360 ] 00:09:08.360 } 00:09:08.360 } 00:09:08.360 }' 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:08.360 pt2 00:09:08.360 pt3' 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.360 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.360 [2024-12-12 05:47:15.863429] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=120d4aff-8927-4aaf-a368-41c5af26f74e 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 120d4aff-8927-4aaf-a368-41c5af26f74e ']' 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.622 [2024-12-12 05:47:15.903092] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:08.622 [2024-12-12 05:47:15.903118] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:08.622 [2024-12-12 05:47:15.903191] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:08.622 [2024-12-12 05:47:15.903254] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:08.622 [2024-12-12 05:47:15.903263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:08.622 05:47:15 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.622 05:47:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.622 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.622 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:08.622 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:08.622 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:08.622 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:08.622 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:08.622 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.622 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:08.622 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:08.622 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:08.622 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.622 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.622 [2024-12-12 05:47:16.046895] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:08.622 [2024-12-12 05:47:16.048783] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:08.622 [2024-12-12 05:47:16.048834] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:08.622 [2024-12-12 05:47:16.048883] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:08.622 [2024-12-12 05:47:16.048931] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:08.623 [2024-12-12 05:47:16.048949] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:08.623 [2024-12-12 05:47:16.048965] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:08.623 [2024-12-12 05:47:16.048976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:08.623 request: 00:09:08.623 { 00:09:08.623 "name": "raid_bdev1", 00:09:08.623 "raid_level": "raid0", 00:09:08.623 "base_bdevs": [ 00:09:08.623 "malloc1", 00:09:08.623 "malloc2", 00:09:08.623 "malloc3" 00:09:08.623 ], 00:09:08.623 "strip_size_kb": 64, 00:09:08.623 "superblock": false, 00:09:08.623 "method": "bdev_raid_create", 00:09:08.623 "req_id": 1 00:09:08.623 } 00:09:08.623 Got JSON-RPC error response 00:09:08.623 response: 00:09:08.623 { 00:09:08.623 "code": -17, 00:09:08.623 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:08.623 } 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.623 [2024-12-12 05:47:16.114722] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:08.623 [2024-12-12 05:47:16.114813] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.623 [2024-12-12 05:47:16.114848] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:08.623 [2024-12-12 05:47:16.114878] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.623 [2024-12-12 05:47:16.117033] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.623 [2024-12-12 05:47:16.117102] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:08.623 [2024-12-12 05:47:16.117205] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:08.623 [2024-12-12 05:47:16.117281] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:08.623 pt1 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.623 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:08.890 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.890 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.890 "name": "raid_bdev1", 00:09:08.890 "uuid": "120d4aff-8927-4aaf-a368-41c5af26f74e", 00:09:08.890 
"strip_size_kb": 64, 00:09:08.890 "state": "configuring", 00:09:08.890 "raid_level": "raid0", 00:09:08.890 "superblock": true, 00:09:08.890 "num_base_bdevs": 3, 00:09:08.890 "num_base_bdevs_discovered": 1, 00:09:08.890 "num_base_bdevs_operational": 3, 00:09:08.890 "base_bdevs_list": [ 00:09:08.890 { 00:09:08.890 "name": "pt1", 00:09:08.890 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:08.890 "is_configured": true, 00:09:08.890 "data_offset": 2048, 00:09:08.890 "data_size": 63488 00:09:08.890 }, 00:09:08.890 { 00:09:08.890 "name": null, 00:09:08.890 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:08.890 "is_configured": false, 00:09:08.890 "data_offset": 2048, 00:09:08.890 "data_size": 63488 00:09:08.890 }, 00:09:08.890 { 00:09:08.890 "name": null, 00:09:08.890 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:08.890 "is_configured": false, 00:09:08.890 "data_offset": 2048, 00:09:08.890 "data_size": 63488 00:09:08.890 } 00:09:08.890 ] 00:09:08.890 }' 00:09:08.890 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.890 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.150 [2024-12-12 05:47:16.546063] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:09.150 [2024-12-12 05:47:16.546129] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.150 [2024-12-12 05:47:16.546154] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:09.150 [2024-12-12 05:47:16.546163] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.150 [2024-12-12 05:47:16.546624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.150 [2024-12-12 05:47:16.546652] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:09.150 [2024-12-12 05:47:16.546749] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:09.150 [2024-12-12 05:47:16.546782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:09.150 pt2 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.150 [2024-12-12 05:47:16.558048] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.150 05:47:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.150 "name": "raid_bdev1", 00:09:09.150 "uuid": "120d4aff-8927-4aaf-a368-41c5af26f74e", 00:09:09.150 "strip_size_kb": 64, 00:09:09.150 "state": "configuring", 00:09:09.150 "raid_level": "raid0", 00:09:09.150 "superblock": true, 00:09:09.150 "num_base_bdevs": 3, 00:09:09.150 "num_base_bdevs_discovered": 1, 00:09:09.150 "num_base_bdevs_operational": 3, 00:09:09.150 "base_bdevs_list": [ 00:09:09.150 { 00:09:09.150 "name": "pt1", 00:09:09.150 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:09.150 "is_configured": true, 00:09:09.150 "data_offset": 2048, 00:09:09.150 "data_size": 63488 00:09:09.150 }, 00:09:09.150 { 00:09:09.150 "name": null, 00:09:09.150 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:09.150 "is_configured": false, 00:09:09.150 "data_offset": 0, 00:09:09.150 "data_size": 63488 00:09:09.150 }, 00:09:09.150 { 00:09:09.150 "name": null, 00:09:09.150 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:09.150 
"is_configured": false, 00:09:09.150 "data_offset": 2048, 00:09:09.150 "data_size": 63488 00:09:09.150 } 00:09:09.150 ] 00:09:09.150 }' 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.150 05:47:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.720 [2024-12-12 05:47:17.017245] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:09.720 [2024-12-12 05:47:17.017347] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.720 [2024-12-12 05:47:17.017382] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:09.720 [2024-12-12 05:47:17.017411] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.720 [2024-12-12 05:47:17.017933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.720 [2024-12-12 05:47:17.017996] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:09.720 [2024-12-12 05:47:17.018122] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:09.720 [2024-12-12 05:47:17.018195] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:09.720 pt2 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.720 [2024-12-12 05:47:17.025216] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:09.720 [2024-12-12 05:47:17.025294] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:09.720 [2024-12-12 05:47:17.025322] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:09.720 [2024-12-12 05:47:17.025353] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:09.720 [2024-12-12 05:47:17.025785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:09.720 [2024-12-12 05:47:17.025844] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:09.720 [2024-12-12 05:47:17.025937] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:09.720 [2024-12-12 05:47:17.025986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:09.720 [2024-12-12 05:47:17.026142] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:09.720 [2024-12-12 05:47:17.026183] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:09.720 [2024-12-12 05:47:17.026495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:09.720 [2024-12-12 05:47:17.026706] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:09.720 [2024-12-12 05:47:17.026746] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:09.720 [2024-12-12 05:47:17.026954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.720 pt3 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.720 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.720 "name": "raid_bdev1", 00:09:09.720 "uuid": "120d4aff-8927-4aaf-a368-41c5af26f74e", 00:09:09.720 "strip_size_kb": 64, 00:09:09.720 "state": "online", 00:09:09.720 "raid_level": "raid0", 00:09:09.720 "superblock": true, 00:09:09.720 "num_base_bdevs": 3, 00:09:09.720 "num_base_bdevs_discovered": 3, 00:09:09.720 "num_base_bdevs_operational": 3, 00:09:09.720 "base_bdevs_list": [ 00:09:09.720 { 00:09:09.720 "name": "pt1", 00:09:09.720 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:09.720 "is_configured": true, 00:09:09.720 "data_offset": 2048, 00:09:09.720 "data_size": 63488 00:09:09.720 }, 00:09:09.720 { 00:09:09.720 "name": "pt2", 00:09:09.720 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:09.720 "is_configured": true, 00:09:09.720 "data_offset": 2048, 00:09:09.721 "data_size": 63488 00:09:09.721 }, 00:09:09.721 { 00:09:09.721 "name": "pt3", 00:09:09.721 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:09.721 "is_configured": true, 00:09:09.721 "data_offset": 2048, 00:09:09.721 "data_size": 63488 00:09:09.721 } 00:09:09.721 ] 00:09:09.721 }' 00:09:09.721 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.721 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.980 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:09.980 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:09.980 05:47:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:09.980 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:09.980 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:09.980 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:09.980 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:09.980 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:09.980 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.980 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.270 [2024-12-12 05:47:17.504720] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:10.270 "name": "raid_bdev1", 00:09:10.270 "aliases": [ 00:09:10.270 "120d4aff-8927-4aaf-a368-41c5af26f74e" 00:09:10.270 ], 00:09:10.270 "product_name": "Raid Volume", 00:09:10.270 "block_size": 512, 00:09:10.270 "num_blocks": 190464, 00:09:10.270 "uuid": "120d4aff-8927-4aaf-a368-41c5af26f74e", 00:09:10.270 "assigned_rate_limits": { 00:09:10.270 "rw_ios_per_sec": 0, 00:09:10.270 "rw_mbytes_per_sec": 0, 00:09:10.270 "r_mbytes_per_sec": 0, 00:09:10.270 "w_mbytes_per_sec": 0 00:09:10.270 }, 00:09:10.270 "claimed": false, 00:09:10.270 "zoned": false, 00:09:10.270 "supported_io_types": { 00:09:10.270 "read": true, 00:09:10.270 "write": true, 00:09:10.270 "unmap": true, 00:09:10.270 "flush": true, 00:09:10.270 "reset": true, 00:09:10.270 "nvme_admin": false, 00:09:10.270 "nvme_io": false, 00:09:10.270 "nvme_io_md": false, 00:09:10.270 
"write_zeroes": true, 00:09:10.270 "zcopy": false, 00:09:10.270 "get_zone_info": false, 00:09:10.270 "zone_management": false, 00:09:10.270 "zone_append": false, 00:09:10.270 "compare": false, 00:09:10.270 "compare_and_write": false, 00:09:10.270 "abort": false, 00:09:10.270 "seek_hole": false, 00:09:10.270 "seek_data": false, 00:09:10.270 "copy": false, 00:09:10.270 "nvme_iov_md": false 00:09:10.270 }, 00:09:10.270 "memory_domains": [ 00:09:10.270 { 00:09:10.270 "dma_device_id": "system", 00:09:10.270 "dma_device_type": 1 00:09:10.270 }, 00:09:10.270 { 00:09:10.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.270 "dma_device_type": 2 00:09:10.270 }, 00:09:10.270 { 00:09:10.270 "dma_device_id": "system", 00:09:10.270 "dma_device_type": 1 00:09:10.270 }, 00:09:10.270 { 00:09:10.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.270 "dma_device_type": 2 00:09:10.270 }, 00:09:10.270 { 00:09:10.270 "dma_device_id": "system", 00:09:10.270 "dma_device_type": 1 00:09:10.270 }, 00:09:10.270 { 00:09:10.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.270 "dma_device_type": 2 00:09:10.270 } 00:09:10.270 ], 00:09:10.270 "driver_specific": { 00:09:10.270 "raid": { 00:09:10.270 "uuid": "120d4aff-8927-4aaf-a368-41c5af26f74e", 00:09:10.270 "strip_size_kb": 64, 00:09:10.270 "state": "online", 00:09:10.270 "raid_level": "raid0", 00:09:10.270 "superblock": true, 00:09:10.270 "num_base_bdevs": 3, 00:09:10.270 "num_base_bdevs_discovered": 3, 00:09:10.270 "num_base_bdevs_operational": 3, 00:09:10.270 "base_bdevs_list": [ 00:09:10.270 { 00:09:10.270 "name": "pt1", 00:09:10.270 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:10.270 "is_configured": true, 00:09:10.270 "data_offset": 2048, 00:09:10.270 "data_size": 63488 00:09:10.270 }, 00:09:10.270 { 00:09:10.270 "name": "pt2", 00:09:10.270 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:10.270 "is_configured": true, 00:09:10.270 "data_offset": 2048, 00:09:10.270 "data_size": 63488 00:09:10.270 }, 00:09:10.270 
{ 00:09:10.270 "name": "pt3", 00:09:10.270 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:10.270 "is_configured": true, 00:09:10.270 "data_offset": 2048, 00:09:10.270 "data_size": 63488 00:09:10.270 } 00:09:10.270 ] 00:09:10.270 } 00:09:10.270 } 00:09:10.270 }' 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:10.270 pt2 00:09:10.270 pt3' 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | 
join(" ")' 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:10.270 [2024-12-12 
05:47:17.760222] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:10.270 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.530 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 120d4aff-8927-4aaf-a368-41c5af26f74e '!=' 120d4aff-8927-4aaf-a368-41c5af26f74e ']' 00:09:10.530 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:10.530 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:10.530 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:10.530 05:47:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66027 00:09:10.530 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66027 ']' 00:09:10.530 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66027 00:09:10.530 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:10.530 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.530 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66027 00:09:10.530 killing process with pid 66027 00:09:10.530 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.530 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.530 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66027' 00:09:10.530 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66027 00:09:10.530 [2024-12-12 05:47:17.842864] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:10.530 [2024-12-12 05:47:17.842951] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:10.530 [2024-12-12 05:47:17.843009] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:10.530 [2024-12-12 05:47:17.843020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:10.530 05:47:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 66027 00:09:10.790 [2024-12-12 05:47:18.126212] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:11.729 ************************************ 00:09:11.729 END TEST raid_superblock_test 00:09:11.729 ************************************ 00:09:11.729 05:47:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:11.729 00:09:11.729 real 0m5.128s 00:09:11.729 user 0m7.421s 00:09:11.729 sys 0m0.846s 00:09:11.729 05:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.729 05:47:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.729 05:47:19 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:11.729 05:47:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:11.729 05:47:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.729 05:47:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:11.729 ************************************ 00:09:11.729 START TEST raid_read_error_test 00:09:11.729 ************************************ 00:09:11.729 05:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:09:11.729 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:11.729 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:11.729 05:47:19 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:11.729 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:11.729 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:11.729 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:11.729 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:11.729 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:11.730 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:11.730 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:11.730 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:11.730 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:11.989 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:11.989 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:11.989 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:11.989 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:11.989 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:11.989 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:11.990 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:11.990 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:11.990 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:11.990 05:47:19 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:11.990 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:11.990 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:11.990 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:11.990 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.dPYCmbHcQG 00:09:11.990 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66285 00:09:11.990 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:11.990 05:47:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66285 00:09:11.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.990 05:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 66285 ']' 00:09:11.990 05:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.990 05:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.990 05:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.990 05:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.990 05:47:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.990 [2024-12-12 05:47:19.346440] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:09:11.990 [2024-12-12 05:47:19.346567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66285 ] 00:09:12.249 [2024-12-12 05:47:19.514124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.249 [2024-12-12 05:47:19.615628] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.509 [2024-12-12 05:47:19.810452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.509 [2024-12-12 05:47:19.810598] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.769 BaseBdev1_malloc 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.769 true 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.769 [2024-12-12 05:47:20.208831] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:12.769 [2024-12-12 05:47:20.209097] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.769 [2024-12-12 05:47:20.209170] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:12.769 [2024-12-12 05:47:20.209216] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.769 [2024-12-12 05:47:20.211284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.769 [2024-12-12 05:47:20.211439] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:12.769 BaseBdev1 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.769 BaseBdev2_malloc 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.769 true 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.769 [2024-12-12 05:47:20.270255] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:12.769 [2024-12-12 05:47:20.270483] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:12.769 [2024-12-12 05:47:20.270563] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:12.769 [2024-12-12 05:47:20.270658] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:12.769 [2024-12-12 05:47:20.272747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:12.769 [2024-12-12 05:47:20.272920] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:12.769 BaseBdev2 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:12.769 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.030 BaseBdev3_malloc 00:09:13.030 05:47:20 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.030 true 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.030 [2024-12-12 05:47:20.350537] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:13.030 [2024-12-12 05:47:20.350585] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.030 [2024-12-12 05:47:20.350601] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:13.030 [2024-12-12 05:47:20.350611] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.030 [2024-12-12 05:47:20.352583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.030 [2024-12-12 05:47:20.352680] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:13.030 BaseBdev3 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.030 [2024-12-12 05:47:20.362591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:13.030 [2024-12-12 05:47:20.364334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:13.030 [2024-12-12 05:47:20.364464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:13.030 [2024-12-12 05:47:20.364675] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:13.030 [2024-12-12 05:47:20.364691] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:13.030 [2024-12-12 05:47:20.364920] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:13.030 [2024-12-12 05:47:20.365063] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:13.030 [2024-12-12 05:47:20.365076] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:13.030 [2024-12-12 05:47:20.365206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.030 05:47:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.030 "name": "raid_bdev1", 00:09:13.030 "uuid": "87ae560a-9c72-4f4c-a546-c368da5b9fad", 00:09:13.030 "strip_size_kb": 64, 00:09:13.030 "state": "online", 00:09:13.030 "raid_level": "raid0", 00:09:13.030 "superblock": true, 00:09:13.030 "num_base_bdevs": 3, 00:09:13.030 "num_base_bdevs_discovered": 3, 00:09:13.030 "num_base_bdevs_operational": 3, 00:09:13.030 "base_bdevs_list": [ 00:09:13.030 { 00:09:13.030 "name": "BaseBdev1", 00:09:13.030 "uuid": "6f34325c-3994-5c8f-9b2b-12d6ebb00376", 00:09:13.030 "is_configured": true, 00:09:13.030 "data_offset": 2048, 00:09:13.030 "data_size": 63488 00:09:13.030 }, 00:09:13.030 { 00:09:13.030 "name": "BaseBdev2", 00:09:13.030 "uuid": "0206eca8-18e3-5e73-be89-5b7354b35ecd", 00:09:13.030 "is_configured": true, 00:09:13.030 "data_offset": 2048, 00:09:13.030 "data_size": 63488 
00:09:13.030 }, 00:09:13.030 { 00:09:13.030 "name": "BaseBdev3", 00:09:13.030 "uuid": "29449b20-adbe-5be4-8cd6-e20f1b5bddba", 00:09:13.030 "is_configured": true, 00:09:13.030 "data_offset": 2048, 00:09:13.030 "data_size": 63488 00:09:13.030 } 00:09:13.030 ] 00:09:13.030 }' 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.030 05:47:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.290 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:13.290 05:47:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:13.549 [2024-12-12 05:47:20.874932] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.490 "name": "raid_bdev1", 00:09:14.490 "uuid": "87ae560a-9c72-4f4c-a546-c368da5b9fad", 00:09:14.490 "strip_size_kb": 64, 00:09:14.490 "state": "online", 00:09:14.490 "raid_level": "raid0", 00:09:14.490 "superblock": true, 00:09:14.490 "num_base_bdevs": 3, 00:09:14.490 "num_base_bdevs_discovered": 3, 00:09:14.490 "num_base_bdevs_operational": 3, 00:09:14.490 "base_bdevs_list": [ 00:09:14.490 { 00:09:14.490 "name": "BaseBdev1", 00:09:14.490 "uuid": "6f34325c-3994-5c8f-9b2b-12d6ebb00376", 00:09:14.490 "is_configured": true, 00:09:14.490 "data_offset": 2048, 00:09:14.490 "data_size": 63488 
00:09:14.490 }, 00:09:14.490 { 00:09:14.490 "name": "BaseBdev2", 00:09:14.490 "uuid": "0206eca8-18e3-5e73-be89-5b7354b35ecd", 00:09:14.490 "is_configured": true, 00:09:14.490 "data_offset": 2048, 00:09:14.490 "data_size": 63488 00:09:14.490 }, 00:09:14.490 { 00:09:14.490 "name": "BaseBdev3", 00:09:14.490 "uuid": "29449b20-adbe-5be4-8cd6-e20f1b5bddba", 00:09:14.490 "is_configured": true, 00:09:14.490 "data_offset": 2048, 00:09:14.490 "data_size": 63488 00:09:14.490 } 00:09:14.490 ] 00:09:14.490 }' 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.490 05:47:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.750 05:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:14.750 05:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.750 05:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.750 [2024-12-12 05:47:22.242757] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:14.750 [2024-12-12 05:47:22.242789] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:14.750 [2024-12-12 05:47:22.245334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:14.750 [2024-12-12 05:47:22.245373] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.750 [2024-12-12 05:47:22.245408] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:14.750 [2024-12-12 05:47:22.245417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:14.750 { 00:09:14.750 "results": [ 00:09:14.750 { 00:09:14.750 "job": "raid_bdev1", 00:09:14.750 "core_mask": "0x1", 00:09:14.750 "workload": "randrw", 00:09:14.750 "percentage": 50, 
00:09:14.750 "status": "finished", 00:09:14.750 "queue_depth": 1, 00:09:14.750 "io_size": 131072, 00:09:14.750 "runtime": 1.368733, 00:09:14.750 "iops": 16361.11644856959, 00:09:14.750 "mibps": 2045.1395560711987, 00:09:14.750 "io_failed": 1, 00:09:14.750 "io_timeout": 0, 00:09:14.750 "avg_latency_us": 84.68456468858555, 00:09:14.750 "min_latency_us": 19.786899563318777, 00:09:14.750 "max_latency_us": 1373.6803493449781 00:09:14.750 } 00:09:14.750 ], 00:09:14.750 "core_count": 1 00:09:14.750 } 00:09:14.750 05:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.750 05:47:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66285 00:09:14.750 05:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 66285 ']' 00:09:14.750 05:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 66285 00:09:14.750 05:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:14.750 05:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.750 05:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66285 00:09:15.011 killing process with pid 66285 00:09:15.011 05:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.011 05:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.011 05:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66285' 00:09:15.011 05:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 66285 00:09:15.011 [2024-12-12 05:47:22.288010] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:15.011 05:47:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 66285 00:09:15.011 [2024-12-12 
05:47:22.511035] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:16.397 05:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.dPYCmbHcQG 00:09:16.397 05:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:16.397 05:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:16.397 05:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:16.397 05:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:16.397 05:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:16.397 05:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:16.397 05:47:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:16.397 00:09:16.397 real 0m4.386s 00:09:16.397 user 0m5.209s 00:09:16.397 sys 0m0.521s 00:09:16.397 05:47:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.397 ************************************ 00:09:16.397 END TEST raid_read_error_test 00:09:16.397 ************************************ 00:09:16.397 05:47:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.397 05:47:23 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:16.397 05:47:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:16.397 05:47:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.397 05:47:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:16.397 ************************************ 00:09:16.397 START TEST raid_write_error_test 00:09:16.397 ************************************ 00:09:16.397 05:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:09:16.397 05:47:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:16.397 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:16.397 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:16.397 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:16.397 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.397 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:16.397 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:16.397 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.397 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:16.397 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:16.397 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.397 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:16.397 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:16.397 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:16.398 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:16.398 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:16.398 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:16.398 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:16.398 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:16.398 05:47:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:16.398 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:16.398 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:16.398 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:16.398 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:16.398 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:16.398 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.Koed8nB7vD 00:09:16.398 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=66426 00:09:16.398 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:16.398 05:47:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 66426 00:09:16.398 05:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 66426 ']' 00:09:16.398 05:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:16.398 05:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.398 05:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:16.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:16.398 05:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.398 05:47:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.398 [2024-12-12 05:47:23.801202] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:09:16.398 [2024-12-12 05:47:23.801415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66426 ] 00:09:16.664 [2024-12-12 05:47:23.970960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.664 [2024-12-12 05:47:24.082455] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.923 [2024-12-12 05:47:24.273440] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:16.923 [2024-12-12 05:47:24.273475] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.182 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.182 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:17.182 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:17.182 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:17.182 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.182 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.182 BaseBdev1_malloc 00:09:17.182 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.182 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:17.182 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.182 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.182 true 00:09:17.182 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.182 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:17.182 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.182 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.182 [2024-12-12 05:47:24.675680] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:17.182 [2024-12-12 05:47:24.675743] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.182 [2024-12-12 05:47:24.675764] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:17.182 [2024-12-12 05:47:24.675775] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.182 [2024-12-12 05:47:24.677914] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.182 [2024-12-12 05:47:24.677954] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:17.182 BaseBdev1 00:09:17.182 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.182 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:17.182 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:17.182 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.182 05:47:24 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:17.443 BaseBdev2_malloc 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.443 true 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.443 [2024-12-12 05:47:24.742087] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:17.443 [2024-12-12 05:47:24.742153] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.443 [2024-12-12 05:47:24.742187] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:17.443 [2024-12-12 05:47:24.742198] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.443 [2024-12-12 05:47:24.744390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.443 [2024-12-12 05:47:24.744431] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:17.443 BaseBdev2 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:17.443 05:47:24 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.443 BaseBdev3_malloc 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.443 true 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.443 [2024-12-12 05:47:24.816085] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:17.443 [2024-12-12 05:47:24.816144] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:17.443 [2024-12-12 05:47:24.816162] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:17.443 [2024-12-12 05:47:24.816172] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:17.443 [2024-12-12 05:47:24.818327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:17.443 [2024-12-12 05:47:24.818367] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:17.443 BaseBdev3 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.443 [2024-12-12 05:47:24.828137] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:17.443 [2024-12-12 05:47:24.830054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:17.443 [2024-12-12 05:47:24.830126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:17.443 [2024-12-12 05:47:24.830329] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:17.443 [2024-12-12 05:47:24.830344] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:17.443 [2024-12-12 05:47:24.830599] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:17.443 [2024-12-12 05:47:24.830774] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:17.443 [2024-12-12 05:47:24.830798] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:17.443 [2024-12-12 05:47:24.830980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:17.443 "name": "raid_bdev1", 00:09:17.443 "uuid": "fd149494-7c2f-4cae-ad6c-cb4a8730ec80", 00:09:17.443 "strip_size_kb": 64, 00:09:17.443 "state": "online", 00:09:17.443 "raid_level": "raid0", 00:09:17.443 "superblock": true, 00:09:17.443 "num_base_bdevs": 3, 00:09:17.443 "num_base_bdevs_discovered": 3, 00:09:17.443 "num_base_bdevs_operational": 3, 00:09:17.443 "base_bdevs_list": [ 00:09:17.443 { 00:09:17.443 "name": "BaseBdev1", 
00:09:17.443 "uuid": "be18e678-3ffc-57a3-bc6f-afe4c28aac85", 00:09:17.443 "is_configured": true, 00:09:17.443 "data_offset": 2048, 00:09:17.443 "data_size": 63488 00:09:17.443 }, 00:09:17.443 { 00:09:17.443 "name": "BaseBdev2", 00:09:17.443 "uuid": "37bd2709-b470-58e7-b2ef-8d6d3f4ea91f", 00:09:17.443 "is_configured": true, 00:09:17.443 "data_offset": 2048, 00:09:17.443 "data_size": 63488 00:09:17.443 }, 00:09:17.443 { 00:09:17.443 "name": "BaseBdev3", 00:09:17.443 "uuid": "e0ea0155-aad7-5704-8612-e86ca3e59093", 00:09:17.443 "is_configured": true, 00:09:17.443 "data_offset": 2048, 00:09:17.443 "data_size": 63488 00:09:17.443 } 00:09:17.443 ] 00:09:17.443 }' 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:17.443 05:47:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.017 05:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:18.017 05:47:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:18.018 [2024-12-12 05:47:25.344418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:18.956 05:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:18.956 05:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.956 05:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.956 05:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.956 05:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:18.956 05:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:18.956 05:47:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:18.956 05:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:18.956 05:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.956 05:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.956 05:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.956 05:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.956 05:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.956 05:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.956 05:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.956 05:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.956 05:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.956 05:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.957 05:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.957 05:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.957 05:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.957 05:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.957 05:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.957 "name": "raid_bdev1", 00:09:18.957 "uuid": "fd149494-7c2f-4cae-ad6c-cb4a8730ec80", 00:09:18.957 "strip_size_kb": 64, 00:09:18.957 "state": "online", 00:09:18.957 
"raid_level": "raid0", 00:09:18.957 "superblock": true, 00:09:18.957 "num_base_bdevs": 3, 00:09:18.957 "num_base_bdevs_discovered": 3, 00:09:18.957 "num_base_bdevs_operational": 3, 00:09:18.957 "base_bdevs_list": [ 00:09:18.957 { 00:09:18.957 "name": "BaseBdev1", 00:09:18.957 "uuid": "be18e678-3ffc-57a3-bc6f-afe4c28aac85", 00:09:18.957 "is_configured": true, 00:09:18.957 "data_offset": 2048, 00:09:18.957 "data_size": 63488 00:09:18.957 }, 00:09:18.957 { 00:09:18.957 "name": "BaseBdev2", 00:09:18.957 "uuid": "37bd2709-b470-58e7-b2ef-8d6d3f4ea91f", 00:09:18.957 "is_configured": true, 00:09:18.957 "data_offset": 2048, 00:09:18.957 "data_size": 63488 00:09:18.957 }, 00:09:18.957 { 00:09:18.957 "name": "BaseBdev3", 00:09:18.957 "uuid": "e0ea0155-aad7-5704-8612-e86ca3e59093", 00:09:18.957 "is_configured": true, 00:09:18.957 "data_offset": 2048, 00:09:18.957 "data_size": 63488 00:09:18.957 } 00:09:18.957 ] 00:09:18.957 }' 00:09:18.957 05:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.957 05:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.216 05:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:19.217 05:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.217 05:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.217 [2024-12-12 05:47:26.712197] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:19.217 [2024-12-12 05:47:26.712281] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.217 [2024-12-12 05:47:26.714884] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.217 [2024-12-12 05:47:26.714970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.217 [2024-12-12 05:47:26.715024] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.217 [2024-12-12 05:47:26.715062] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:19.217 { 00:09:19.217 "results": [ 00:09:19.217 { 00:09:19.217 "job": "raid_bdev1", 00:09:19.217 "core_mask": "0x1", 00:09:19.217 "workload": "randrw", 00:09:19.217 "percentage": 50, 00:09:19.217 "status": "finished", 00:09:19.217 "queue_depth": 1, 00:09:19.217 "io_size": 131072, 00:09:19.217 "runtime": 1.368784, 00:09:19.217 "iops": 16123.06981963553, 00:09:19.217 "mibps": 2015.3837274544412, 00:09:19.217 "io_failed": 1, 00:09:19.217 "io_timeout": 0, 00:09:19.217 "avg_latency_us": 85.99945785838231, 00:09:19.217 "min_latency_us": 25.2646288209607, 00:09:19.217 "max_latency_us": 1380.8349344978167 00:09:19.217 } 00:09:19.217 ], 00:09:19.217 "core_count": 1 00:09:19.217 } 00:09:19.217 05:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.217 05:47:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 66426 00:09:19.217 05:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 66426 ']' 00:09:19.217 05:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 66426 00:09:19.217 05:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:19.217 05:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.217 05:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66426 00:09:19.476 killing process with pid 66426 00:09:19.476 05:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.476 05:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.476 05:47:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66426' 00:09:19.476 05:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 66426 00:09:19.476 [2024-12-12 05:47:26.761021] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:19.476 05:47:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 66426 00:09:19.476 [2024-12-12 05:47:26.971585] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:20.855 05:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.Koed8nB7vD 00:09:20.855 05:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:20.855 05:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:20.855 05:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:20.855 05:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:20.855 05:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:20.855 05:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:20.855 05:47:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:20.855 00:09:20.855 real 0m4.406s 00:09:20.855 user 0m5.214s 00:09:20.855 sys 0m0.559s 00:09:20.855 ************************************ 00:09:20.855 END TEST raid_write_error_test 00:09:20.855 ************************************ 00:09:20.855 05:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:20.855 05:47:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.855 05:47:28 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:20.855 05:47:28 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:20.855 05:47:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:20.855 05:47:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:20.855 05:47:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:20.855 ************************************ 00:09:20.855 START TEST raid_state_function_test 00:09:20.855 ************************************ 00:09:20.855 05:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:09:20.855 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:20.855 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:20.855 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:20.855 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:20.855 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:20.855 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.855 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:20.855 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.855 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.855 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:20.855 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.855 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.855 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:20.855 05:47:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:20.855 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:20.855 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:20.855 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:20.856 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:20.856 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:20.856 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:20.856 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:20.856 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:20.856 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:20.856 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:20.856 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:20.856 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:20.856 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=66564 00:09:20.856 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66564' 00:09:20.856 Process raid pid: 66564 00:09:20.856 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:20.856 05:47:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 66564 00:09:20.856 05:47:28 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 66564 ']' 00:09:20.856 05:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.856 05:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:20.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.856 05:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.856 05:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:20.856 05:47:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.856 [2024-12-12 05:47:28.264793] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:09:20.856 [2024-12-12 05:47:28.265340] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.115 [2024-12-12 05:47:28.419989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.115 [2024-12-12 05:47:28.530613] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.374 [2024-12-12 05:47:28.733575] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.374 [2024-12-12 05:47:28.733694] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.634 [2024-12-12 05:47:29.090580] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:21.634 [2024-12-12 05:47:29.090689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:21.634 [2024-12-12 05:47:29.090721] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:21.634 [2024-12-12 05:47:29.090734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:21.634 [2024-12-12 05:47:29.090741] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:21.634 [2024-12-12 05:47:29.090750] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.634 "name": "Existed_Raid", 00:09:21.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.634 "strip_size_kb": 64, 00:09:21.634 "state": "configuring", 00:09:21.634 "raid_level": "concat", 00:09:21.634 "superblock": false, 00:09:21.634 "num_base_bdevs": 3, 00:09:21.634 "num_base_bdevs_discovered": 0, 00:09:21.634 "num_base_bdevs_operational": 3, 00:09:21.634 "base_bdevs_list": [ 00:09:21.634 { 00:09:21.634 "name": "BaseBdev1", 00:09:21.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.634 "is_configured": false, 00:09:21.634 "data_offset": 0, 00:09:21.634 "data_size": 0 00:09:21.634 }, 00:09:21.634 { 00:09:21.634 "name": "BaseBdev2", 00:09:21.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.634 "is_configured": false, 00:09:21.634 "data_offset": 0, 00:09:21.634 "data_size": 0 00:09:21.634 }, 00:09:21.634 { 00:09:21.634 "name": "BaseBdev3", 00:09:21.634 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:21.634 "is_configured": false, 00:09:21.634 "data_offset": 0, 00:09:21.634 "data_size": 0 00:09:21.634 } 00:09:21.634 ] 00:09:21.634 }' 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.634 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.204 [2024-12-12 05:47:29.545749] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:22.204 [2024-12-12 05:47:29.545785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.204 [2024-12-12 05:47:29.557739] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.204 [2024-12-12 05:47:29.557786] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.204 [2024-12-12 05:47:29.557795] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.204 [2024-12-12 05:47:29.557804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:22.204 [2024-12-12 05:47:29.557810] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:22.204 [2024-12-12 05:47:29.557820] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.204 [2024-12-12 05:47:29.604818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.204 BaseBdev1 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.204 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.204 [ 00:09:22.204 { 00:09:22.204 "name": "BaseBdev1", 00:09:22.204 "aliases": [ 00:09:22.204 "a0092086-e544-42ce-9d0b-0d70766b9750" 00:09:22.204 ], 00:09:22.204 "product_name": "Malloc disk", 00:09:22.204 "block_size": 512, 00:09:22.204 "num_blocks": 65536, 00:09:22.204 "uuid": "a0092086-e544-42ce-9d0b-0d70766b9750", 00:09:22.204 "assigned_rate_limits": { 00:09:22.204 "rw_ios_per_sec": 0, 00:09:22.204 "rw_mbytes_per_sec": 0, 00:09:22.204 "r_mbytes_per_sec": 0, 00:09:22.204 "w_mbytes_per_sec": 0 00:09:22.204 }, 00:09:22.204 "claimed": true, 00:09:22.205 "claim_type": "exclusive_write", 00:09:22.205 "zoned": false, 00:09:22.205 "supported_io_types": { 00:09:22.205 "read": true, 00:09:22.205 "write": true, 00:09:22.205 "unmap": true, 00:09:22.205 "flush": true, 00:09:22.205 "reset": true, 00:09:22.205 "nvme_admin": false, 00:09:22.205 "nvme_io": false, 00:09:22.205 "nvme_io_md": false, 00:09:22.205 "write_zeroes": true, 00:09:22.205 "zcopy": true, 00:09:22.205 "get_zone_info": false, 00:09:22.205 "zone_management": false, 00:09:22.205 "zone_append": false, 00:09:22.205 "compare": false, 00:09:22.205 "compare_and_write": false, 00:09:22.205 "abort": true, 00:09:22.205 "seek_hole": false, 00:09:22.205 "seek_data": false, 00:09:22.205 "copy": true, 00:09:22.205 "nvme_iov_md": false 00:09:22.205 }, 00:09:22.205 "memory_domains": [ 00:09:22.205 { 00:09:22.205 "dma_device_id": "system", 00:09:22.205 "dma_device_type": 1 00:09:22.205 }, 00:09:22.205 { 00:09:22.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:22.205 "dma_device_type": 2 00:09:22.205 } 00:09:22.205 ], 00:09:22.205 "driver_specific": {} 00:09:22.205 } 00:09:22.205 ] 00:09:22.205 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.205 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:22.205 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:22.205 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.205 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.205 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.205 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.205 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.205 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.205 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.205 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.205 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.205 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.205 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.205 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.205 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.205 05:47:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.205 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.205 "name": "Existed_Raid", 00:09:22.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.205 "strip_size_kb": 64, 00:09:22.205 "state": "configuring", 00:09:22.205 "raid_level": "concat", 00:09:22.205 "superblock": false, 00:09:22.205 "num_base_bdevs": 3, 00:09:22.205 "num_base_bdevs_discovered": 1, 00:09:22.205 "num_base_bdevs_operational": 3, 00:09:22.205 "base_bdevs_list": [ 00:09:22.205 { 00:09:22.205 "name": "BaseBdev1", 00:09:22.205 "uuid": "a0092086-e544-42ce-9d0b-0d70766b9750", 00:09:22.205 "is_configured": true, 00:09:22.205 "data_offset": 0, 00:09:22.205 "data_size": 65536 00:09:22.205 }, 00:09:22.205 { 00:09:22.205 "name": "BaseBdev2", 00:09:22.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.205 "is_configured": false, 00:09:22.205 "data_offset": 0, 00:09:22.205 "data_size": 0 00:09:22.205 }, 00:09:22.205 { 00:09:22.205 "name": "BaseBdev3", 00:09:22.205 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.205 "is_configured": false, 00:09:22.205 "data_offset": 0, 00:09:22.205 "data_size": 0 00:09:22.205 } 00:09:22.205 ] 00:09:22.205 }' 00:09:22.205 05:47:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.205 05:47:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.774 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:22.774 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.774 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.774 [2024-12-12 05:47:30.040133] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:22.774 [2024-12-12 05:47:30.040247] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:22.774 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.774 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:22.774 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.774 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.774 [2024-12-12 05:47:30.052153] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.774 [2024-12-12 05:47:30.054067] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.775 [2024-12-12 05:47:30.054112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.775 [2024-12-12 05:47:30.054122] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:22.775 [2024-12-12 05:47:30.054131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:22.775 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.775 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:22.775 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:22.775 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:22.775 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.775 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.775 05:47:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.775 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.775 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.775 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.775 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.775 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.775 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.775 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.775 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.775 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.775 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.775 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.775 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.775 "name": "Existed_Raid", 00:09:22.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.775 "strip_size_kb": 64, 00:09:22.775 "state": "configuring", 00:09:22.775 "raid_level": "concat", 00:09:22.775 "superblock": false, 00:09:22.775 "num_base_bdevs": 3, 00:09:22.775 "num_base_bdevs_discovered": 1, 00:09:22.775 "num_base_bdevs_operational": 3, 00:09:22.775 "base_bdevs_list": [ 00:09:22.775 { 00:09:22.775 "name": "BaseBdev1", 00:09:22.775 "uuid": "a0092086-e544-42ce-9d0b-0d70766b9750", 00:09:22.775 "is_configured": true, 00:09:22.775 "data_offset": 
0, 00:09:22.775 "data_size": 65536 00:09:22.775 }, 00:09:22.775 { 00:09:22.775 "name": "BaseBdev2", 00:09:22.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.775 "is_configured": false, 00:09:22.775 "data_offset": 0, 00:09:22.775 "data_size": 0 00:09:22.775 }, 00:09:22.775 { 00:09:22.775 "name": "BaseBdev3", 00:09:22.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.775 "is_configured": false, 00:09:22.775 "data_offset": 0, 00:09:22.775 "data_size": 0 00:09:22.775 } 00:09:22.775 ] 00:09:22.775 }' 00:09:22.775 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.775 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.035 [2024-12-12 05:47:30.481612] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:23.035 BaseBdev2 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.035 [ 00:09:23.035 { 00:09:23.035 "name": "BaseBdev2", 00:09:23.035 "aliases": [ 00:09:23.035 "c1add28f-606b-4b5d-bf98-bf11cb712bee" 00:09:23.035 ], 00:09:23.035 "product_name": "Malloc disk", 00:09:23.035 "block_size": 512, 00:09:23.035 "num_blocks": 65536, 00:09:23.035 "uuid": "c1add28f-606b-4b5d-bf98-bf11cb712bee", 00:09:23.035 "assigned_rate_limits": { 00:09:23.035 "rw_ios_per_sec": 0, 00:09:23.035 "rw_mbytes_per_sec": 0, 00:09:23.035 "r_mbytes_per_sec": 0, 00:09:23.035 "w_mbytes_per_sec": 0 00:09:23.035 }, 00:09:23.035 "claimed": true, 00:09:23.035 "claim_type": "exclusive_write", 00:09:23.035 "zoned": false, 00:09:23.035 "supported_io_types": { 00:09:23.035 "read": true, 00:09:23.035 "write": true, 00:09:23.035 "unmap": true, 00:09:23.035 "flush": true, 00:09:23.035 "reset": true, 00:09:23.035 "nvme_admin": false, 00:09:23.035 "nvme_io": false, 00:09:23.035 "nvme_io_md": false, 00:09:23.035 "write_zeroes": true, 00:09:23.035 "zcopy": true, 00:09:23.035 "get_zone_info": false, 00:09:23.035 "zone_management": false, 00:09:23.035 "zone_append": false, 00:09:23.035 "compare": false, 00:09:23.035 "compare_and_write": false, 00:09:23.035 "abort": true, 00:09:23.035 "seek_hole": 
false, 00:09:23.035 "seek_data": false, 00:09:23.035 "copy": true, 00:09:23.035 "nvme_iov_md": false 00:09:23.035 }, 00:09:23.035 "memory_domains": [ 00:09:23.035 { 00:09:23.035 "dma_device_id": "system", 00:09:23.035 "dma_device_type": 1 00:09:23.035 }, 00:09:23.035 { 00:09:23.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.035 "dma_device_type": 2 00:09:23.035 } 00:09:23.035 ], 00:09:23.035 "driver_specific": {} 00:09:23.035 } 00:09:23.035 ] 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.035 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.294 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.294 "name": "Existed_Raid", 00:09:23.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.294 "strip_size_kb": 64, 00:09:23.294 "state": "configuring", 00:09:23.294 "raid_level": "concat", 00:09:23.294 "superblock": false, 00:09:23.294 "num_base_bdevs": 3, 00:09:23.294 "num_base_bdevs_discovered": 2, 00:09:23.294 "num_base_bdevs_operational": 3, 00:09:23.294 "base_bdevs_list": [ 00:09:23.294 { 00:09:23.294 "name": "BaseBdev1", 00:09:23.294 "uuid": "a0092086-e544-42ce-9d0b-0d70766b9750", 00:09:23.294 "is_configured": true, 00:09:23.295 "data_offset": 0, 00:09:23.295 "data_size": 65536 00:09:23.295 }, 00:09:23.295 { 00:09:23.295 "name": "BaseBdev2", 00:09:23.295 "uuid": "c1add28f-606b-4b5d-bf98-bf11cb712bee", 00:09:23.295 "is_configured": true, 00:09:23.295 "data_offset": 0, 00:09:23.295 "data_size": 65536 00:09:23.295 }, 00:09:23.295 { 00:09:23.295 "name": "BaseBdev3", 00:09:23.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.295 "is_configured": false, 00:09:23.295 "data_offset": 0, 00:09:23.295 "data_size": 0 00:09:23.295 } 00:09:23.295 ] 00:09:23.295 }' 00:09:23.295 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.295 05:47:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:23.553 05:47:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:23.553 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.553 05:47:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.553 [2024-12-12 05:47:31.047100] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.553 [2024-12-12 05:47:31.047151] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:23.553 [2024-12-12 05:47:31.047180] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:23.553 [2024-12-12 05:47:31.047445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:23.553 [2024-12-12 05:47:31.047678] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:23.553 [2024-12-12 05:47:31.047698] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:23.553 [2024-12-12 05:47:31.047973] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:23.553 BaseBdev3 00:09:23.553 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.553 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:23.553 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:23.553 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:23.553 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:23.553 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:23.553 05:47:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:23.553 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:23.553 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.553 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.553 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.553 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:23.553 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.553 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.553 [ 00:09:23.553 { 00:09:23.553 "name": "BaseBdev3", 00:09:23.553 "aliases": [ 00:09:23.553 "1fe5f223-bdd8-41d5-96f9-02d89c13ab6f" 00:09:23.553 ], 00:09:23.553 "product_name": "Malloc disk", 00:09:23.553 "block_size": 512, 00:09:23.553 "num_blocks": 65536, 00:09:23.553 "uuid": "1fe5f223-bdd8-41d5-96f9-02d89c13ab6f", 00:09:23.812 "assigned_rate_limits": { 00:09:23.812 "rw_ios_per_sec": 0, 00:09:23.812 "rw_mbytes_per_sec": 0, 00:09:23.812 "r_mbytes_per_sec": 0, 00:09:23.812 "w_mbytes_per_sec": 0 00:09:23.812 }, 00:09:23.812 "claimed": true, 00:09:23.812 "claim_type": "exclusive_write", 00:09:23.812 "zoned": false, 00:09:23.812 "supported_io_types": { 00:09:23.812 "read": true, 00:09:23.812 "write": true, 00:09:23.812 "unmap": true, 00:09:23.812 "flush": true, 00:09:23.812 "reset": true, 00:09:23.812 "nvme_admin": false, 00:09:23.812 "nvme_io": false, 00:09:23.812 "nvme_io_md": false, 00:09:23.812 "write_zeroes": true, 00:09:23.812 "zcopy": true, 00:09:23.812 "get_zone_info": false, 00:09:23.812 "zone_management": false, 00:09:23.812 "zone_append": false, 00:09:23.812 "compare": false, 
00:09:23.812 "compare_and_write": false, 00:09:23.812 "abort": true, 00:09:23.812 "seek_hole": false, 00:09:23.812 "seek_data": false, 00:09:23.812 "copy": true, 00:09:23.812 "nvme_iov_md": false 00:09:23.812 }, 00:09:23.812 "memory_domains": [ 00:09:23.812 { 00:09:23.812 "dma_device_id": "system", 00:09:23.812 "dma_device_type": 1 00:09:23.812 }, 00:09:23.812 { 00:09:23.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.812 "dma_device_type": 2 00:09:23.812 } 00:09:23.812 ], 00:09:23.812 "driver_specific": {} 00:09:23.812 } 00:09:23.812 ] 00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.812 "name": "Existed_Raid", 00:09:23.812 "uuid": "b11eeed8-ec1e-4344-8313-a58a7c10fede", 00:09:23.812 "strip_size_kb": 64, 00:09:23.812 "state": "online", 00:09:23.812 "raid_level": "concat", 00:09:23.812 "superblock": false, 00:09:23.812 "num_base_bdevs": 3, 00:09:23.812 "num_base_bdevs_discovered": 3, 00:09:23.812 "num_base_bdevs_operational": 3, 00:09:23.812 "base_bdevs_list": [ 00:09:23.812 { 00:09:23.812 "name": "BaseBdev1", 00:09:23.812 "uuid": "a0092086-e544-42ce-9d0b-0d70766b9750", 00:09:23.812 "is_configured": true, 00:09:23.812 "data_offset": 0, 00:09:23.812 "data_size": 65536 00:09:23.812 }, 00:09:23.812 { 00:09:23.812 "name": "BaseBdev2", 00:09:23.812 "uuid": "c1add28f-606b-4b5d-bf98-bf11cb712bee", 00:09:23.812 "is_configured": true, 00:09:23.812 "data_offset": 0, 00:09:23.812 "data_size": 65536 00:09:23.812 }, 00:09:23.812 { 00:09:23.812 "name": "BaseBdev3", 00:09:23.812 "uuid": "1fe5f223-bdd8-41d5-96f9-02d89c13ab6f", 00:09:23.812 "is_configured": true, 00:09:23.812 "data_offset": 0, 00:09:23.812 "data_size": 65536 00:09:23.812 } 00:09:23.812 ] 00:09:23.812 }' 00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:23.812 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.081 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:24.081 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:24.081 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:24.081 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:24.081 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:24.081 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:24.081 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:24.081 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.081 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.081 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:24.081 [2024-12-12 05:47:31.530651] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:24.081 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.081 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:24.081 "name": "Existed_Raid", 00:09:24.081 "aliases": [ 00:09:24.081 "b11eeed8-ec1e-4344-8313-a58a7c10fede" 00:09:24.081 ], 00:09:24.081 "product_name": "Raid Volume", 00:09:24.081 "block_size": 512, 00:09:24.081 "num_blocks": 196608, 00:09:24.081 "uuid": "b11eeed8-ec1e-4344-8313-a58a7c10fede", 00:09:24.081 "assigned_rate_limits": { 00:09:24.081 "rw_ios_per_sec": 0, 00:09:24.081 "rw_mbytes_per_sec": 0, 00:09:24.081 "r_mbytes_per_sec": 
0, 00:09:24.081 "w_mbytes_per_sec": 0 00:09:24.081 }, 00:09:24.081 "claimed": false, 00:09:24.081 "zoned": false, 00:09:24.081 "supported_io_types": { 00:09:24.081 "read": true, 00:09:24.081 "write": true, 00:09:24.081 "unmap": true, 00:09:24.081 "flush": true, 00:09:24.081 "reset": true, 00:09:24.081 "nvme_admin": false, 00:09:24.081 "nvme_io": false, 00:09:24.081 "nvme_io_md": false, 00:09:24.081 "write_zeroes": true, 00:09:24.081 "zcopy": false, 00:09:24.081 "get_zone_info": false, 00:09:24.081 "zone_management": false, 00:09:24.081 "zone_append": false, 00:09:24.081 "compare": false, 00:09:24.081 "compare_and_write": false, 00:09:24.081 "abort": false, 00:09:24.081 "seek_hole": false, 00:09:24.081 "seek_data": false, 00:09:24.081 "copy": false, 00:09:24.081 "nvme_iov_md": false 00:09:24.081 }, 00:09:24.081 "memory_domains": [ 00:09:24.081 { 00:09:24.081 "dma_device_id": "system", 00:09:24.081 "dma_device_type": 1 00:09:24.081 }, 00:09:24.081 { 00:09:24.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.081 "dma_device_type": 2 00:09:24.081 }, 00:09:24.081 { 00:09:24.081 "dma_device_id": "system", 00:09:24.081 "dma_device_type": 1 00:09:24.081 }, 00:09:24.081 { 00:09:24.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.081 "dma_device_type": 2 00:09:24.081 }, 00:09:24.081 { 00:09:24.081 "dma_device_id": "system", 00:09:24.081 "dma_device_type": 1 00:09:24.081 }, 00:09:24.081 { 00:09:24.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.081 "dma_device_type": 2 00:09:24.081 } 00:09:24.081 ], 00:09:24.081 "driver_specific": { 00:09:24.081 "raid": { 00:09:24.081 "uuid": "b11eeed8-ec1e-4344-8313-a58a7c10fede", 00:09:24.081 "strip_size_kb": 64, 00:09:24.081 "state": "online", 00:09:24.081 "raid_level": "concat", 00:09:24.081 "superblock": false, 00:09:24.081 "num_base_bdevs": 3, 00:09:24.081 "num_base_bdevs_discovered": 3, 00:09:24.081 "num_base_bdevs_operational": 3, 00:09:24.081 "base_bdevs_list": [ 00:09:24.081 { 00:09:24.082 "name": "BaseBdev1", 
00:09:24.082 "uuid": "a0092086-e544-42ce-9d0b-0d70766b9750", 00:09:24.082 "is_configured": true, 00:09:24.082 "data_offset": 0, 00:09:24.082 "data_size": 65536 00:09:24.082 }, 00:09:24.082 { 00:09:24.082 "name": "BaseBdev2", 00:09:24.082 "uuid": "c1add28f-606b-4b5d-bf98-bf11cb712bee", 00:09:24.082 "is_configured": true, 00:09:24.082 "data_offset": 0, 00:09:24.082 "data_size": 65536 00:09:24.082 }, 00:09:24.082 { 00:09:24.082 "name": "BaseBdev3", 00:09:24.082 "uuid": "1fe5f223-bdd8-41d5-96f9-02d89c13ab6f", 00:09:24.082 "is_configured": true, 00:09:24.082 "data_offset": 0, 00:09:24.082 "data_size": 65536 00:09:24.082 } 00:09:24.082 ] 00:09:24.082 } 00:09:24.082 } 00:09:24.082 }' 00:09:24.082 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:24.357 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:24.357 BaseBdev2 00:09:24.357 BaseBdev3' 00:09:24.357 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.357 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:24.357 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.357 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:24.357 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.357 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.357 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.357 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:24.357 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.357 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.357 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.357 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.357 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:24.357 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.357 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.357 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.357 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:24.357 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.358 [2024-12-12 05:47:31.773992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:24.358 [2024-12-12 05:47:31.774026] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:24.358 [2024-12-12 05:47:31.774077] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.358 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.616 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.616 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.616 "name": "Existed_Raid", 00:09:24.616 "uuid": "b11eeed8-ec1e-4344-8313-a58a7c10fede", 00:09:24.616 "strip_size_kb": 64, 00:09:24.616 "state": "offline", 00:09:24.616 "raid_level": "concat", 00:09:24.616 "superblock": false, 00:09:24.616 "num_base_bdevs": 3, 00:09:24.616 "num_base_bdevs_discovered": 2, 00:09:24.616 "num_base_bdevs_operational": 2, 00:09:24.616 "base_bdevs_list": [ 00:09:24.616 { 00:09:24.616 "name": null, 00:09:24.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.616 "is_configured": false, 00:09:24.616 "data_offset": 0, 00:09:24.616 "data_size": 65536 00:09:24.616 }, 00:09:24.616 { 00:09:24.616 "name": "BaseBdev2", 00:09:24.616 "uuid": 
"c1add28f-606b-4b5d-bf98-bf11cb712bee", 00:09:24.616 "is_configured": true, 00:09:24.616 "data_offset": 0, 00:09:24.616 "data_size": 65536 00:09:24.616 }, 00:09:24.616 { 00:09:24.616 "name": "BaseBdev3", 00:09:24.616 "uuid": "1fe5f223-bdd8-41d5-96f9-02d89c13ab6f", 00:09:24.616 "is_configured": true, 00:09:24.616 "data_offset": 0, 00:09:24.616 "data_size": 65536 00:09:24.616 } 00:09:24.616 ] 00:09:24.616 }' 00:09:24.616 05:47:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.616 05:47:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.874 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:24.874 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:24.874 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.874 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:24.874 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.874 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.874 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.874 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:24.874 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:24.874 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:24.874 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.874 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.874 [2024-12-12 05:47:32.345994] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:25.133 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.133 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:25.133 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:25.133 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.133 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:25.133 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.133 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.133 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.133 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:25.133 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:25.133 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:25.133 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.134 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.134 [2024-12-12 05:47:32.498521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:25.134 [2024-12-12 05:47:32.498572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:25.134 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.134 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:25.134 05:47:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:25.134 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.134 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:25.134 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.134 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.134 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.134 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:25.134 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:25.134 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:25.134 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:25.134 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:25.134 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:25.134 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.134 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.393 BaseBdev2 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:25.393 
05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.393 [ 00:09:25.393 { 00:09:25.393 "name": "BaseBdev2", 00:09:25.393 "aliases": [ 00:09:25.393 "3dcbbfa6-8725-4d82-9dee-4467f09b4a0e" 00:09:25.393 ], 00:09:25.393 "product_name": "Malloc disk", 00:09:25.393 "block_size": 512, 00:09:25.393 "num_blocks": 65536, 00:09:25.393 "uuid": "3dcbbfa6-8725-4d82-9dee-4467f09b4a0e", 00:09:25.393 "assigned_rate_limits": { 00:09:25.393 "rw_ios_per_sec": 0, 00:09:25.393 "rw_mbytes_per_sec": 0, 00:09:25.393 "r_mbytes_per_sec": 0, 00:09:25.393 "w_mbytes_per_sec": 0 00:09:25.393 }, 00:09:25.393 "claimed": false, 00:09:25.393 "zoned": false, 00:09:25.393 "supported_io_types": { 00:09:25.393 "read": true, 00:09:25.393 "write": true, 00:09:25.393 "unmap": true, 00:09:25.393 "flush": true, 00:09:25.393 "reset": true, 00:09:25.393 "nvme_admin": false, 00:09:25.393 "nvme_io": false, 00:09:25.393 "nvme_io_md": false, 00:09:25.393 "write_zeroes": true, 
00:09:25.393 "zcopy": true, 00:09:25.393 "get_zone_info": false, 00:09:25.393 "zone_management": false, 00:09:25.393 "zone_append": false, 00:09:25.393 "compare": false, 00:09:25.393 "compare_and_write": false, 00:09:25.393 "abort": true, 00:09:25.393 "seek_hole": false, 00:09:25.393 "seek_data": false, 00:09:25.393 "copy": true, 00:09:25.393 "nvme_iov_md": false 00:09:25.393 }, 00:09:25.393 "memory_domains": [ 00:09:25.393 { 00:09:25.393 "dma_device_id": "system", 00:09:25.393 "dma_device_type": 1 00:09:25.393 }, 00:09:25.393 { 00:09:25.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.393 "dma_device_type": 2 00:09:25.393 } 00:09:25.393 ], 00:09:25.393 "driver_specific": {} 00:09:25.393 } 00:09:25.393 ] 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.393 BaseBdev3 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:25.393 05:47:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.393 [ 00:09:25.393 { 00:09:25.393 "name": "BaseBdev3", 00:09:25.393 "aliases": [ 00:09:25.393 "29048c18-edd1-4fdd-89cd-f166eb399112" 00:09:25.393 ], 00:09:25.393 "product_name": "Malloc disk", 00:09:25.393 "block_size": 512, 00:09:25.393 "num_blocks": 65536, 00:09:25.393 "uuid": "29048c18-edd1-4fdd-89cd-f166eb399112", 00:09:25.393 "assigned_rate_limits": { 00:09:25.393 "rw_ios_per_sec": 0, 00:09:25.393 "rw_mbytes_per_sec": 0, 00:09:25.393 "r_mbytes_per_sec": 0, 00:09:25.393 "w_mbytes_per_sec": 0 00:09:25.393 }, 00:09:25.393 "claimed": false, 00:09:25.393 "zoned": false, 00:09:25.393 "supported_io_types": { 00:09:25.393 "read": true, 00:09:25.393 "write": true, 00:09:25.393 "unmap": true, 00:09:25.393 "flush": true, 00:09:25.393 "reset": true, 00:09:25.393 "nvme_admin": false, 00:09:25.393 "nvme_io": false, 00:09:25.393 "nvme_io_md": false, 00:09:25.393 "write_zeroes": true, 
00:09:25.393 "zcopy": true, 00:09:25.393 "get_zone_info": false, 00:09:25.393 "zone_management": false, 00:09:25.393 "zone_append": false, 00:09:25.393 "compare": false, 00:09:25.393 "compare_and_write": false, 00:09:25.393 "abort": true, 00:09:25.393 "seek_hole": false, 00:09:25.393 "seek_data": false, 00:09:25.393 "copy": true, 00:09:25.393 "nvme_iov_md": false 00:09:25.393 }, 00:09:25.393 "memory_domains": [ 00:09:25.393 { 00:09:25.393 "dma_device_id": "system", 00:09:25.393 "dma_device_type": 1 00:09:25.393 }, 00:09:25.393 { 00:09:25.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.393 "dma_device_type": 2 00:09:25.393 } 00:09:25.393 ], 00:09:25.393 "driver_specific": {} 00:09:25.393 } 00:09:25.393 ] 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.393 [2024-12-12 05:47:32.807923] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:25.393 [2024-12-12 05:47:32.807971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:25.393 [2024-12-12 05:47:32.808008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:25.393 [2024-12-12 05:47:32.809843] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.393 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.394 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.394 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.394 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.394 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.394 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.394 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.394 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.394 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.394 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.394 05:47:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.394 "name": "Existed_Raid", 00:09:25.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.394 "strip_size_kb": 64, 00:09:25.394 "state": "configuring", 00:09:25.394 "raid_level": "concat", 00:09:25.394 "superblock": false, 00:09:25.394 "num_base_bdevs": 3, 00:09:25.394 "num_base_bdevs_discovered": 2, 00:09:25.394 "num_base_bdevs_operational": 3, 00:09:25.394 "base_bdevs_list": [ 00:09:25.394 { 00:09:25.394 "name": "BaseBdev1", 00:09:25.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.394 "is_configured": false, 00:09:25.394 "data_offset": 0, 00:09:25.394 "data_size": 0 00:09:25.394 }, 00:09:25.394 { 00:09:25.394 "name": "BaseBdev2", 00:09:25.394 "uuid": "3dcbbfa6-8725-4d82-9dee-4467f09b4a0e", 00:09:25.394 "is_configured": true, 00:09:25.394 "data_offset": 0, 00:09:25.394 "data_size": 65536 00:09:25.394 }, 00:09:25.394 { 00:09:25.394 "name": "BaseBdev3", 00:09:25.394 "uuid": "29048c18-edd1-4fdd-89cd-f166eb399112", 00:09:25.394 "is_configured": true, 00:09:25.394 "data_offset": 0, 00:09:25.394 "data_size": 65536 00:09:25.394 } 00:09:25.394 ] 00:09:25.394 }' 00:09:25.394 05:47:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.394 05:47:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.959 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:25.959 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.959 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.959 [2024-12-12 05:47:33.275167] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:25.959 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.959 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:25.959 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.959 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:25.959 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:25.959 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.959 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.959 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:25.959 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.959 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.959 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.959 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.960 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.960 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.960 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.960 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.960 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.960 "name": "Existed_Raid", 00:09:25.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.960 "strip_size_kb": 64, 00:09:25.960 "state": "configuring", 00:09:25.960 "raid_level": "concat", 00:09:25.960 "superblock": false, 
00:09:25.960 "num_base_bdevs": 3, 00:09:25.960 "num_base_bdevs_discovered": 1, 00:09:25.960 "num_base_bdevs_operational": 3, 00:09:25.960 "base_bdevs_list": [ 00:09:25.960 { 00:09:25.960 "name": "BaseBdev1", 00:09:25.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.960 "is_configured": false, 00:09:25.960 "data_offset": 0, 00:09:25.960 "data_size": 0 00:09:25.960 }, 00:09:25.960 { 00:09:25.960 "name": null, 00:09:25.960 "uuid": "3dcbbfa6-8725-4d82-9dee-4467f09b4a0e", 00:09:25.960 "is_configured": false, 00:09:25.960 "data_offset": 0, 00:09:25.960 "data_size": 65536 00:09:25.960 }, 00:09:25.960 { 00:09:25.960 "name": "BaseBdev3", 00:09:25.960 "uuid": "29048c18-edd1-4fdd-89cd-f166eb399112", 00:09:25.960 "is_configured": true, 00:09:25.960 "data_offset": 0, 00:09:25.960 "data_size": 65536 00:09:25.960 } 00:09:25.960 ] 00:09:25.960 }' 00:09:25.960 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.960 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.218 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.218 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:26.218 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.218 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.218 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.477 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:26.477 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:26.477 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.477 
05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.477 [2024-12-12 05:47:33.803224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.477 BaseBdev1 00:09:26.477 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.477 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:26.477 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:26.477 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:26.477 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:26.477 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:26.477 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.478 [ 00:09:26.478 { 00:09:26.478 "name": "BaseBdev1", 00:09:26.478 "aliases": [ 00:09:26.478 "f3e5ab6f-b35c-45c6-bf17-1322a5ec0fe0" 00:09:26.478 ], 00:09:26.478 "product_name": 
"Malloc disk", 00:09:26.478 "block_size": 512, 00:09:26.478 "num_blocks": 65536, 00:09:26.478 "uuid": "f3e5ab6f-b35c-45c6-bf17-1322a5ec0fe0", 00:09:26.478 "assigned_rate_limits": { 00:09:26.478 "rw_ios_per_sec": 0, 00:09:26.478 "rw_mbytes_per_sec": 0, 00:09:26.478 "r_mbytes_per_sec": 0, 00:09:26.478 "w_mbytes_per_sec": 0 00:09:26.478 }, 00:09:26.478 "claimed": true, 00:09:26.478 "claim_type": "exclusive_write", 00:09:26.478 "zoned": false, 00:09:26.478 "supported_io_types": { 00:09:26.478 "read": true, 00:09:26.478 "write": true, 00:09:26.478 "unmap": true, 00:09:26.478 "flush": true, 00:09:26.478 "reset": true, 00:09:26.478 "nvme_admin": false, 00:09:26.478 "nvme_io": false, 00:09:26.478 "nvme_io_md": false, 00:09:26.478 "write_zeroes": true, 00:09:26.478 "zcopy": true, 00:09:26.478 "get_zone_info": false, 00:09:26.478 "zone_management": false, 00:09:26.478 "zone_append": false, 00:09:26.478 "compare": false, 00:09:26.478 "compare_and_write": false, 00:09:26.478 "abort": true, 00:09:26.478 "seek_hole": false, 00:09:26.478 "seek_data": false, 00:09:26.478 "copy": true, 00:09:26.478 "nvme_iov_md": false 00:09:26.478 }, 00:09:26.478 "memory_domains": [ 00:09:26.478 { 00:09:26.478 "dma_device_id": "system", 00:09:26.478 "dma_device_type": 1 00:09:26.478 }, 00:09:26.478 { 00:09:26.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.478 "dma_device_type": 2 00:09:26.478 } 00:09:26.478 ], 00:09:26.478 "driver_specific": {} 00:09:26.478 } 00:09:26.478 ] 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:26.478 05:47:33 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:26.478 "name": "Existed_Raid", 00:09:26.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:26.478 "strip_size_kb": 64, 00:09:26.478 "state": "configuring", 00:09:26.478 "raid_level": "concat", 00:09:26.478 "superblock": false, 00:09:26.478 "num_base_bdevs": 3, 00:09:26.478 "num_base_bdevs_discovered": 2, 00:09:26.478 "num_base_bdevs_operational": 3, 00:09:26.478 "base_bdevs_list": [ 00:09:26.478 { 00:09:26.478 "name": "BaseBdev1", 
00:09:26.478 "uuid": "f3e5ab6f-b35c-45c6-bf17-1322a5ec0fe0", 00:09:26.478 "is_configured": true, 00:09:26.478 "data_offset": 0, 00:09:26.478 "data_size": 65536 00:09:26.478 }, 00:09:26.478 { 00:09:26.478 "name": null, 00:09:26.478 "uuid": "3dcbbfa6-8725-4d82-9dee-4467f09b4a0e", 00:09:26.478 "is_configured": false, 00:09:26.478 "data_offset": 0, 00:09:26.478 "data_size": 65536 00:09:26.478 }, 00:09:26.478 { 00:09:26.478 "name": "BaseBdev3", 00:09:26.478 "uuid": "29048c18-edd1-4fdd-89cd-f166eb399112", 00:09:26.478 "is_configured": true, 00:09:26.478 "data_offset": 0, 00:09:26.478 "data_size": 65536 00:09:26.478 } 00:09:26.478 ] 00:09:26.478 }' 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:26.478 05:47:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.045 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.045 05:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.046 [2024-12-12 05:47:34.350356] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:27.046 
05:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.046 "name": "Existed_Raid", 00:09:27.046 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:27.046 "strip_size_kb": 64, 00:09:27.046 "state": "configuring", 00:09:27.046 "raid_level": "concat", 00:09:27.046 "superblock": false, 00:09:27.046 "num_base_bdevs": 3, 00:09:27.046 "num_base_bdevs_discovered": 1, 00:09:27.046 "num_base_bdevs_operational": 3, 00:09:27.046 "base_bdevs_list": [ 00:09:27.046 { 00:09:27.046 "name": "BaseBdev1", 00:09:27.046 "uuid": "f3e5ab6f-b35c-45c6-bf17-1322a5ec0fe0", 00:09:27.046 "is_configured": true, 00:09:27.046 "data_offset": 0, 00:09:27.046 "data_size": 65536 00:09:27.046 }, 00:09:27.046 { 00:09:27.046 "name": null, 00:09:27.046 "uuid": "3dcbbfa6-8725-4d82-9dee-4467f09b4a0e", 00:09:27.046 "is_configured": false, 00:09:27.046 "data_offset": 0, 00:09:27.046 "data_size": 65536 00:09:27.046 }, 00:09:27.046 { 00:09:27.046 "name": null, 00:09:27.046 "uuid": "29048c18-edd1-4fdd-89cd-f166eb399112", 00:09:27.046 "is_configured": false, 00:09:27.046 "data_offset": 0, 00:09:27.046 "data_size": 65536 00:09:27.046 } 00:09:27.046 ] 00:09:27.046 }' 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.046 05:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.613 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.613 05:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.613 05:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.613 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:27.613 05:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.613 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:27.613 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:27.613 05:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.613 05:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.613 [2024-12-12 05:47:34.885468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:27.613 05:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.613 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:27.613 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.613 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.613 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:27.613 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:27.613 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:27.613 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.613 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.613 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.613 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.613 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.614 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.614 05:47:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.614 05:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.614 05:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.614 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.614 "name": "Existed_Raid", 00:09:27.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.614 "strip_size_kb": 64, 00:09:27.614 "state": "configuring", 00:09:27.614 "raid_level": "concat", 00:09:27.614 "superblock": false, 00:09:27.614 "num_base_bdevs": 3, 00:09:27.614 "num_base_bdevs_discovered": 2, 00:09:27.614 "num_base_bdevs_operational": 3, 00:09:27.614 "base_bdevs_list": [ 00:09:27.614 { 00:09:27.614 "name": "BaseBdev1", 00:09:27.614 "uuid": "f3e5ab6f-b35c-45c6-bf17-1322a5ec0fe0", 00:09:27.614 "is_configured": true, 00:09:27.614 "data_offset": 0, 00:09:27.614 "data_size": 65536 00:09:27.614 }, 00:09:27.614 { 00:09:27.614 "name": null, 00:09:27.614 "uuid": "3dcbbfa6-8725-4d82-9dee-4467f09b4a0e", 00:09:27.614 "is_configured": false, 00:09:27.614 "data_offset": 0, 00:09:27.614 "data_size": 65536 00:09:27.614 }, 00:09:27.614 { 00:09:27.614 "name": "BaseBdev3", 00:09:27.614 "uuid": "29048c18-edd1-4fdd-89cd-f166eb399112", 00:09:27.614 "is_configured": true, 00:09:27.614 "data_offset": 0, 00:09:27.614 "data_size": 65536 00:09:27.614 } 00:09:27.614 ] 00:09:27.614 }' 00:09:27.614 05:47:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.614 05:47:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.872 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.872 05:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.872 05:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:27.872 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:27.872 05:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.872 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:27.872 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:27.872 05:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.872 05:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.872 [2024-12-12 05:47:35.384629] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:28.132 05:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.132 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:28.132 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.133 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.133 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.133 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.133 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.133 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.133 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.133 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.133 05:47:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.133 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.133 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.133 05:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.133 05:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.133 05:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.133 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.133 "name": "Existed_Raid", 00:09:28.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.133 "strip_size_kb": 64, 00:09:28.133 "state": "configuring", 00:09:28.133 "raid_level": "concat", 00:09:28.133 "superblock": false, 00:09:28.133 "num_base_bdevs": 3, 00:09:28.133 "num_base_bdevs_discovered": 1, 00:09:28.133 "num_base_bdevs_operational": 3, 00:09:28.133 "base_bdevs_list": [ 00:09:28.133 { 00:09:28.133 "name": null, 00:09:28.133 "uuid": "f3e5ab6f-b35c-45c6-bf17-1322a5ec0fe0", 00:09:28.133 "is_configured": false, 00:09:28.133 "data_offset": 0, 00:09:28.133 "data_size": 65536 00:09:28.133 }, 00:09:28.133 { 00:09:28.133 "name": null, 00:09:28.133 "uuid": "3dcbbfa6-8725-4d82-9dee-4467f09b4a0e", 00:09:28.133 "is_configured": false, 00:09:28.133 "data_offset": 0, 00:09:28.133 "data_size": 65536 00:09:28.133 }, 00:09:28.133 { 00:09:28.133 "name": "BaseBdev3", 00:09:28.133 "uuid": "29048c18-edd1-4fdd-89cd-f166eb399112", 00:09:28.133 "is_configured": true, 00:09:28.133 "data_offset": 0, 00:09:28.133 "data_size": 65536 00:09:28.133 } 00:09:28.133 ] 00:09:28.133 }' 00:09:28.133 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.133 05:47:35 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.700 [2024-12-12 05:47:35.965177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.700 05:47:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.700 05:47:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.700 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.700 "name": "Existed_Raid", 00:09:28.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.700 "strip_size_kb": 64, 00:09:28.700 "state": "configuring", 00:09:28.700 "raid_level": "concat", 00:09:28.700 "superblock": false, 00:09:28.700 "num_base_bdevs": 3, 00:09:28.700 "num_base_bdevs_discovered": 2, 00:09:28.700 "num_base_bdevs_operational": 3, 00:09:28.700 "base_bdevs_list": [ 00:09:28.700 { 00:09:28.700 "name": null, 00:09:28.700 "uuid": "f3e5ab6f-b35c-45c6-bf17-1322a5ec0fe0", 00:09:28.700 "is_configured": false, 00:09:28.700 "data_offset": 0, 00:09:28.700 "data_size": 65536 00:09:28.700 }, 00:09:28.700 { 00:09:28.700 "name": "BaseBdev2", 00:09:28.700 "uuid": "3dcbbfa6-8725-4d82-9dee-4467f09b4a0e", 00:09:28.700 "is_configured": true, 00:09:28.700 "data_offset": 
0, 00:09:28.700 "data_size": 65536 00:09:28.700 }, 00:09:28.700 { 00:09:28.700 "name": "BaseBdev3", 00:09:28.700 "uuid": "29048c18-edd1-4fdd-89cd-f166eb399112", 00:09:28.700 "is_configured": true, 00:09:28.700 "data_offset": 0, 00:09:28.700 "data_size": 65536 00:09:28.700 } 00:09:28.700 ] 00:09:28.700 }' 00:09:28.700 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.700 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.959 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.959 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:28.959 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.959 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.959 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.959 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:28.959 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.959 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:28.959 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.959 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.959 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.959 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f3e5ab6f-b35c-45c6-bf17-1322a5ec0fe0 00:09:28.959 05:47:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.959 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.217 [2024-12-12 05:47:36.516889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:29.218 [2024-12-12 05:47:36.516931] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:29.218 [2024-12-12 05:47:36.516941] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:29.218 [2024-12-12 05:47:36.517183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:29.218 [2024-12-12 05:47:36.517377] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:29.218 [2024-12-12 05:47:36.517395] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:29.218 [2024-12-12 05:47:36.517679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.218 NewBaseBdev 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:29.218 
05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.218 [ 00:09:29.218 { 00:09:29.218 "name": "NewBaseBdev", 00:09:29.218 "aliases": [ 00:09:29.218 "f3e5ab6f-b35c-45c6-bf17-1322a5ec0fe0" 00:09:29.218 ], 00:09:29.218 "product_name": "Malloc disk", 00:09:29.218 "block_size": 512, 00:09:29.218 "num_blocks": 65536, 00:09:29.218 "uuid": "f3e5ab6f-b35c-45c6-bf17-1322a5ec0fe0", 00:09:29.218 "assigned_rate_limits": { 00:09:29.218 "rw_ios_per_sec": 0, 00:09:29.218 "rw_mbytes_per_sec": 0, 00:09:29.218 "r_mbytes_per_sec": 0, 00:09:29.218 "w_mbytes_per_sec": 0 00:09:29.218 }, 00:09:29.218 "claimed": true, 00:09:29.218 "claim_type": "exclusive_write", 00:09:29.218 "zoned": false, 00:09:29.218 "supported_io_types": { 00:09:29.218 "read": true, 00:09:29.218 "write": true, 00:09:29.218 "unmap": true, 00:09:29.218 "flush": true, 00:09:29.218 "reset": true, 00:09:29.218 "nvme_admin": false, 00:09:29.218 "nvme_io": false, 00:09:29.218 "nvme_io_md": false, 00:09:29.218 "write_zeroes": true, 00:09:29.218 "zcopy": true, 00:09:29.218 "get_zone_info": false, 00:09:29.218 "zone_management": false, 00:09:29.218 "zone_append": false, 00:09:29.218 "compare": false, 00:09:29.218 "compare_and_write": false, 00:09:29.218 "abort": true, 00:09:29.218 "seek_hole": false, 00:09:29.218 "seek_data": false, 00:09:29.218 "copy": true, 00:09:29.218 "nvme_iov_md": false 00:09:29.218 }, 00:09:29.218 
"memory_domains": [ 00:09:29.218 { 00:09:29.218 "dma_device_id": "system", 00:09:29.218 "dma_device_type": 1 00:09:29.218 }, 00:09:29.218 { 00:09:29.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.218 "dma_device_type": 2 00:09:29.218 } 00:09:29.218 ], 00:09:29.218 "driver_specific": {} 00:09:29.218 } 00:09:29.218 ] 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.218 "name": "Existed_Raid", 00:09:29.218 "uuid": "83f1cc39-4a05-4fa6-9779-302e5bac1df4", 00:09:29.218 "strip_size_kb": 64, 00:09:29.218 "state": "online", 00:09:29.218 "raid_level": "concat", 00:09:29.218 "superblock": false, 00:09:29.218 "num_base_bdevs": 3, 00:09:29.218 "num_base_bdevs_discovered": 3, 00:09:29.218 "num_base_bdevs_operational": 3, 00:09:29.218 "base_bdevs_list": [ 00:09:29.218 { 00:09:29.218 "name": "NewBaseBdev", 00:09:29.218 "uuid": "f3e5ab6f-b35c-45c6-bf17-1322a5ec0fe0", 00:09:29.218 "is_configured": true, 00:09:29.218 "data_offset": 0, 00:09:29.218 "data_size": 65536 00:09:29.218 }, 00:09:29.218 { 00:09:29.218 "name": "BaseBdev2", 00:09:29.218 "uuid": "3dcbbfa6-8725-4d82-9dee-4467f09b4a0e", 00:09:29.218 "is_configured": true, 00:09:29.218 "data_offset": 0, 00:09:29.218 "data_size": 65536 00:09:29.218 }, 00:09:29.218 { 00:09:29.218 "name": "BaseBdev3", 00:09:29.218 "uuid": "29048c18-edd1-4fdd-89cd-f166eb399112", 00:09:29.218 "is_configured": true, 00:09:29.218 "data_offset": 0, 00:09:29.218 "data_size": 65536 00:09:29.218 } 00:09:29.218 ] 00:09:29.218 }' 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.218 05:47:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.786 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:29.786 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:29.786 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:29.786 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:29.786 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:29.786 05:47:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:29.786 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:29.786 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:29.786 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.786 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.786 [2024-12-12 05:47:37.008400] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.786 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.786 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:29.786 "name": "Existed_Raid", 00:09:29.786 "aliases": [ 00:09:29.786 "83f1cc39-4a05-4fa6-9779-302e5bac1df4" 00:09:29.786 ], 00:09:29.786 "product_name": "Raid Volume", 00:09:29.786 "block_size": 512, 00:09:29.786 "num_blocks": 196608, 00:09:29.786 "uuid": "83f1cc39-4a05-4fa6-9779-302e5bac1df4", 00:09:29.786 "assigned_rate_limits": { 00:09:29.786 "rw_ios_per_sec": 0, 00:09:29.786 "rw_mbytes_per_sec": 0, 00:09:29.786 "r_mbytes_per_sec": 0, 00:09:29.786 "w_mbytes_per_sec": 0 00:09:29.786 }, 00:09:29.786 "claimed": false, 00:09:29.786 "zoned": false, 00:09:29.786 "supported_io_types": { 00:09:29.786 "read": true, 00:09:29.786 "write": true, 00:09:29.786 "unmap": true, 00:09:29.786 "flush": true, 00:09:29.786 "reset": true, 00:09:29.786 "nvme_admin": false, 00:09:29.786 "nvme_io": false, 00:09:29.786 "nvme_io_md": false, 00:09:29.786 "write_zeroes": true, 
00:09:29.786 "zcopy": false, 00:09:29.786 "get_zone_info": false, 00:09:29.786 "zone_management": false, 00:09:29.786 "zone_append": false, 00:09:29.786 "compare": false, 00:09:29.786 "compare_and_write": false, 00:09:29.786 "abort": false, 00:09:29.786 "seek_hole": false, 00:09:29.786 "seek_data": false, 00:09:29.786 "copy": false, 00:09:29.786 "nvme_iov_md": false 00:09:29.786 }, 00:09:29.786 "memory_domains": [ 00:09:29.786 { 00:09:29.786 "dma_device_id": "system", 00:09:29.786 "dma_device_type": 1 00:09:29.786 }, 00:09:29.786 { 00:09:29.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.786 "dma_device_type": 2 00:09:29.786 }, 00:09:29.786 { 00:09:29.786 "dma_device_id": "system", 00:09:29.786 "dma_device_type": 1 00:09:29.786 }, 00:09:29.786 { 00:09:29.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.786 "dma_device_type": 2 00:09:29.786 }, 00:09:29.786 { 00:09:29.786 "dma_device_id": "system", 00:09:29.786 "dma_device_type": 1 00:09:29.786 }, 00:09:29.786 { 00:09:29.786 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.786 "dma_device_type": 2 00:09:29.786 } 00:09:29.786 ], 00:09:29.786 "driver_specific": { 00:09:29.786 "raid": { 00:09:29.786 "uuid": "83f1cc39-4a05-4fa6-9779-302e5bac1df4", 00:09:29.787 "strip_size_kb": 64, 00:09:29.787 "state": "online", 00:09:29.787 "raid_level": "concat", 00:09:29.787 "superblock": false, 00:09:29.787 "num_base_bdevs": 3, 00:09:29.787 "num_base_bdevs_discovered": 3, 00:09:29.787 "num_base_bdevs_operational": 3, 00:09:29.787 "base_bdevs_list": [ 00:09:29.787 { 00:09:29.787 "name": "NewBaseBdev", 00:09:29.787 "uuid": "f3e5ab6f-b35c-45c6-bf17-1322a5ec0fe0", 00:09:29.787 "is_configured": true, 00:09:29.787 "data_offset": 0, 00:09:29.787 "data_size": 65536 00:09:29.787 }, 00:09:29.787 { 00:09:29.787 "name": "BaseBdev2", 00:09:29.787 "uuid": "3dcbbfa6-8725-4d82-9dee-4467f09b4a0e", 00:09:29.787 "is_configured": true, 00:09:29.787 "data_offset": 0, 00:09:29.787 "data_size": 65536 00:09:29.787 }, 00:09:29.787 { 
00:09:29.787 "name": "BaseBdev3", 00:09:29.787 "uuid": "29048c18-edd1-4fdd-89cd-f166eb399112", 00:09:29.787 "is_configured": true, 00:09:29.787 "data_offset": 0, 00:09:29.787 "data_size": 65536 00:09:29.787 } 00:09:29.787 ] 00:09:29.787 } 00:09:29.787 } 00:09:29.787 }' 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:29.787 BaseBdev2 00:09:29.787 BaseBdev3' 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:29.787 [2024-12-12 05:47:37.279631] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:29.787 [2024-12-12 05:47:37.279660] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.787 [2024-12-12 05:47:37.279730] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.787 [2024-12-12 05:47:37.279785] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.787 [2024-12-12 05:47:37.279797] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 66564 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 66564 ']' 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 66564 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.787 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66564 00:09:30.045 killing process with pid 66564 00:09:30.045 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.045 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.045 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66564' 00:09:30.045 05:47:37 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 66564 00:09:30.045 [2024-12-12 05:47:37.327849] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.045 05:47:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 66564 00:09:30.304 [2024-12-12 05:47:37.625632] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:31.240 ************************************ 00:09:31.240 END TEST raid_state_function_test 00:09:31.240 ************************************ 00:09:31.240 05:47:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:31.240 00:09:31.240 real 0m10.562s 00:09:31.240 user 0m16.892s 00:09:31.240 sys 0m1.819s 00:09:31.240 05:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.240 05:47:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.501 05:47:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:31.501 05:47:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:31.501 05:47:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.501 05:47:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:31.501 ************************************ 00:09:31.501 START TEST raid_state_function_test_sb 00:09:31.501 ************************************ 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67187 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:31.501 Process raid pid: 67187 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67187' 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67187 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67187 ']' 00:09:31.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
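Editor's note: the traced lines above show `bdev_raid.sh` deriving `-z 64` (strip size, kept because the level is concat, not raid1) and `-s` (because `superblock=true`). A minimal sketch of how those flags combine into the create call for this run — not the actual `bdev_raid.sh` source, just the same values reassembled:

```shell
# Values mirror this run: concat level keeps a strip size; superblock=true
# becomes -s. For raid1 the script would leave strip_size_create_arg empty.
raid_level=concat
strip_size_create_arg='-z 64'
superblock_create_arg='-s'
base_bdevs="BaseBdev1 BaseBdev2 BaseBdev3"
cmd="bdev_raid_create $strip_size_create_arg $superblock_create_arg -r $raid_level -b '$base_bdevs' -n Existed_Raid"
echo "$cmd"
```

This reproduces the `rpc_cmd bdev_raid_create -z 64 -s -r concat ...` invocation traced below.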
00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.501 05:47:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:31.501 [2024-12-12 05:47:38.889957] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:09:31.501 [2024-12-12 05:47:38.890145] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.761 [2024-12-12 05:47:39.067335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.761 [2024-12-12 05:47:39.178380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.021 [2024-12-12 05:47:39.386290] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.021 [2024-12-12 05:47:39.386433] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.281 [2024-12-12 05:47:39.724336] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.281 [2024-12-12 05:47:39.724447] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.281 [2024-12-12 
05:47:39.724477] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.281 [2024-12-12 05:47:39.724501] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.281 [2024-12-12 05:47:39.724539] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:32.281 [2024-12-12 05:47:39.724563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.281 "name": "Existed_Raid", 00:09:32.281 "uuid": "a3436cc3-c37b-4248-a57e-208afea771c2", 00:09:32.281 "strip_size_kb": 64, 00:09:32.281 "state": "configuring", 00:09:32.281 "raid_level": "concat", 00:09:32.281 "superblock": true, 00:09:32.281 "num_base_bdevs": 3, 00:09:32.281 "num_base_bdevs_discovered": 0, 00:09:32.281 "num_base_bdevs_operational": 3, 00:09:32.281 "base_bdevs_list": [ 00:09:32.281 { 00:09:32.281 "name": "BaseBdev1", 00:09:32.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.281 "is_configured": false, 00:09:32.281 "data_offset": 0, 00:09:32.281 "data_size": 0 00:09:32.281 }, 00:09:32.281 { 00:09:32.281 "name": "BaseBdev2", 00:09:32.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.281 "is_configured": false, 00:09:32.281 "data_offset": 0, 00:09:32.281 "data_size": 0 00:09:32.281 }, 00:09:32.281 { 00:09:32.281 "name": "BaseBdev3", 00:09:32.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.281 "is_configured": false, 00:09:32.281 "data_offset": 0, 00:09:32.281 "data_size": 0 00:09:32.281 } 00:09:32.281 ] 00:09:32.281 }' 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.281 05:47:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.851 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:32.851 05:47:40 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.851 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.851 [2024-12-12 05:47:40.179483] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:32.851 [2024-12-12 05:47:40.179596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:32.851 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.851 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:32.851 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.851 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.851 [2024-12-12 05:47:40.187477] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.851 [2024-12-12 05:47:40.187567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.851 [2024-12-12 05:47:40.187580] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:32.851 [2024-12-12 05:47:40.187590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:32.851 [2024-12-12 05:47:40.187596] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:32.851 [2024-12-12 05:47:40.187605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:32.851 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.851 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:32.851 
05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.851 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.851 [2024-12-12 05:47:40.233727] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:32.851 BaseBdev1 00:09:32.851 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.851 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:32.851 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:32.851 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:32.851 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:32.851 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:32.851 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:32.851 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:32.851 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.851 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.852 [ 00:09:32.852 { 
00:09:32.852 "name": "BaseBdev1", 00:09:32.852 "aliases": [ 00:09:32.852 "66f0fdbd-f2cd-4766-a1a2-dae8628bf868" 00:09:32.852 ], 00:09:32.852 "product_name": "Malloc disk", 00:09:32.852 "block_size": 512, 00:09:32.852 "num_blocks": 65536, 00:09:32.852 "uuid": "66f0fdbd-f2cd-4766-a1a2-dae8628bf868", 00:09:32.852 "assigned_rate_limits": { 00:09:32.852 "rw_ios_per_sec": 0, 00:09:32.852 "rw_mbytes_per_sec": 0, 00:09:32.852 "r_mbytes_per_sec": 0, 00:09:32.852 "w_mbytes_per_sec": 0 00:09:32.852 }, 00:09:32.852 "claimed": true, 00:09:32.852 "claim_type": "exclusive_write", 00:09:32.852 "zoned": false, 00:09:32.852 "supported_io_types": { 00:09:32.852 "read": true, 00:09:32.852 "write": true, 00:09:32.852 "unmap": true, 00:09:32.852 "flush": true, 00:09:32.852 "reset": true, 00:09:32.852 "nvme_admin": false, 00:09:32.852 "nvme_io": false, 00:09:32.852 "nvme_io_md": false, 00:09:32.852 "write_zeroes": true, 00:09:32.852 "zcopy": true, 00:09:32.852 "get_zone_info": false, 00:09:32.852 "zone_management": false, 00:09:32.852 "zone_append": false, 00:09:32.852 "compare": false, 00:09:32.852 "compare_and_write": false, 00:09:32.852 "abort": true, 00:09:32.852 "seek_hole": false, 00:09:32.852 "seek_data": false, 00:09:32.852 "copy": true, 00:09:32.852 "nvme_iov_md": false 00:09:32.852 }, 00:09:32.852 "memory_domains": [ 00:09:32.852 { 00:09:32.852 "dma_device_id": "system", 00:09:32.852 "dma_device_type": 1 00:09:32.852 }, 00:09:32.852 { 00:09:32.852 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.852 "dma_device_type": 2 00:09:32.852 } 00:09:32.852 ], 00:09:32.852 "driver_specific": {} 00:09:32.852 } 00:09:32.852 ] 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
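Editor's note: the `verify_raid_bdev_state Existed_Raid configuring concat 64 3` call traced here filters the `bdev_raid_get_bdevs all` output with `jq` and compares fields. A minimal sketch of that check (requires jq; the JSON is a trimmed stand-in for the real RPC output, with field values from this run):

```shell
# Stand-in for `rpc_cmd bdev_raid_get_bdevs all` output.
raid_bdevs='[{"name":"Existed_Raid","state":"configuring","raid_level":"concat","strip_size_kb":64,"num_base_bdevs_operational":3}]'
# Same selector as bdev_raid.sh@113: pick the named raid bdev.
info=$(echo "$raid_bdevs" | jq -r '.[] | select(.name == "Existed_Raid")')
state=$(echo "$info" | jq -r '.state')
level=$(echo "$info" | jq -r '.raid_level')
[[ "$state" == configuring && "$level" == concat ]] && echo ok
```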
00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:32.852 "name": "Existed_Raid", 00:09:32.852 "uuid": "b3549046-a1ea-474c-9322-40f6ed1a288b", 00:09:32.852 "strip_size_kb": 64, 00:09:32.852 "state": "configuring", 00:09:32.852 "raid_level": "concat", 00:09:32.852 "superblock": true, 00:09:32.852 
"num_base_bdevs": 3, 00:09:32.852 "num_base_bdevs_discovered": 1, 00:09:32.852 "num_base_bdevs_operational": 3, 00:09:32.852 "base_bdevs_list": [ 00:09:32.852 { 00:09:32.852 "name": "BaseBdev1", 00:09:32.852 "uuid": "66f0fdbd-f2cd-4766-a1a2-dae8628bf868", 00:09:32.852 "is_configured": true, 00:09:32.852 "data_offset": 2048, 00:09:32.852 "data_size": 63488 00:09:32.852 }, 00:09:32.852 { 00:09:32.852 "name": "BaseBdev2", 00:09:32.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.852 "is_configured": false, 00:09:32.852 "data_offset": 0, 00:09:32.852 "data_size": 0 00:09:32.852 }, 00:09:32.852 { 00:09:32.852 "name": "BaseBdev3", 00:09:32.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:32.852 "is_configured": false, 00:09:32.852 "data_offset": 0, 00:09:32.852 "data_size": 0 00:09:32.852 } 00:09:32.852 ] 00:09:32.852 }' 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:32.852 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.458 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:33.458 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.458 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.458 [2024-12-12 05:47:40.673008] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:33.458 [2024-12-12 05:47:40.673107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:33.458 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.458 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:33.458 
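Editor's note: the `base_bdevs_list` arrays dumped above (configured BaseBdev1 plus two all-zero placeholders) are what the `bdev_raid.sh@188` filter seen earlier walks to collect configured base bdev names. The filter applied to a trimmed stand-in (requires jq):

```shell
# Stand-in for the raid bdev JSON: one configured entry, one placeholder.
info='{"driver_specific":{"raid":{"base_bdevs_list":[{"name":"BaseBdev1","is_configured":true},{"name":"BaseBdev2","is_configured":false}]}}}'
# Same jq filter as bdev_raid.sh@188: keep only configured entries' names.
names=$(echo "$info" | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
echo "$names"
```

Only `BaseBdev1` survives the `select`, matching how the earlier test loop iterated just the configured bdevs.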
05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.458 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.458 [2024-12-12 05:47:40.685048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:33.458 [2024-12-12 05:47:40.686919] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.458 [2024-12-12 05:47:40.686954] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.458 [2024-12-12 05:47:40.686963] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:33.458 [2024-12-12 05:47:40.686973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:33.458 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.458 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:33.458 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.458 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.458 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.458 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.458 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.458 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.458 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.458 05:47:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.458 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.459 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.459 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.459 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.459 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.459 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.459 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.459 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.459 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.459 "name": "Existed_Raid", 00:09:33.459 "uuid": "f7c5eda4-8093-415f-ab2a-e40bc77b3cbc", 00:09:33.459 "strip_size_kb": 64, 00:09:33.459 "state": "configuring", 00:09:33.459 "raid_level": "concat", 00:09:33.459 "superblock": true, 00:09:33.459 "num_base_bdevs": 3, 00:09:33.459 "num_base_bdevs_discovered": 1, 00:09:33.459 "num_base_bdevs_operational": 3, 00:09:33.459 "base_bdevs_list": [ 00:09:33.459 { 00:09:33.459 "name": "BaseBdev1", 00:09:33.459 "uuid": "66f0fdbd-f2cd-4766-a1a2-dae8628bf868", 00:09:33.459 "is_configured": true, 00:09:33.459 "data_offset": 2048, 00:09:33.459 "data_size": 63488 00:09:33.459 }, 00:09:33.459 { 00:09:33.459 "name": "BaseBdev2", 00:09:33.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.459 "is_configured": false, 00:09:33.459 "data_offset": 0, 00:09:33.459 "data_size": 0 00:09:33.459 }, 00:09:33.459 { 00:09:33.459 "name": "BaseBdev3", 00:09:33.459 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:33.459 "is_configured": false, 00:09:33.459 "data_offset": 0, 00:09:33.459 "data_size": 0 00:09:33.459 } 00:09:33.459 ] 00:09:33.459 }' 00:09:33.459 05:47:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.459 05:47:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.719 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:33.719 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.719 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.719 [2024-12-12 05:47:41.183306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:33.719 BaseBdev2 00:09:33.719 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.719 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:33.719 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:33.719 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:33.719 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:33.719 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:33.719 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:33.719 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:33.719 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.719 05:47:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:33.719 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.719 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:33.719 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.719 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.719 [ 00:09:33.719 { 00:09:33.719 "name": "BaseBdev2", 00:09:33.719 "aliases": [ 00:09:33.719 "270fd0cb-2396-477b-bddf-370e61445b85" 00:09:33.719 ], 00:09:33.719 "product_name": "Malloc disk", 00:09:33.719 "block_size": 512, 00:09:33.719 "num_blocks": 65536, 00:09:33.719 "uuid": "270fd0cb-2396-477b-bddf-370e61445b85", 00:09:33.719 "assigned_rate_limits": { 00:09:33.719 "rw_ios_per_sec": 0, 00:09:33.719 "rw_mbytes_per_sec": 0, 00:09:33.719 "r_mbytes_per_sec": 0, 00:09:33.719 "w_mbytes_per_sec": 0 00:09:33.719 }, 00:09:33.719 "claimed": true, 00:09:33.719 "claim_type": "exclusive_write", 00:09:33.719 "zoned": false, 00:09:33.719 "supported_io_types": { 00:09:33.719 "read": true, 00:09:33.719 "write": true, 00:09:33.719 "unmap": true, 00:09:33.719 "flush": true, 00:09:33.719 "reset": true, 00:09:33.719 "nvme_admin": false, 00:09:33.719 "nvme_io": false, 00:09:33.719 "nvme_io_md": false, 00:09:33.719 "write_zeroes": true, 00:09:33.719 "zcopy": true, 00:09:33.719 "get_zone_info": false, 00:09:33.719 "zone_management": false, 00:09:33.719 "zone_append": false, 00:09:33.719 "compare": false, 00:09:33.719 "compare_and_write": false, 00:09:33.719 "abort": true, 00:09:33.719 "seek_hole": false, 00:09:33.719 "seek_data": false, 00:09:33.719 "copy": true, 00:09:33.719 "nvme_iov_md": false 00:09:33.719 }, 00:09:33.719 "memory_domains": [ 00:09:33.719 { 00:09:33.719 "dma_device_id": "system", 00:09:33.719 "dma_device_type": 1 00:09:33.719 }, 00:09:33.719 { 00:09:33.719 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.719 "dma_device_type": 2 00:09:33.719 } 00:09:33.719 ], 00:09:33.719 "driver_specific": {} 00:09:33.719 } 00:09:33.719 ] 00:09:33.719 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.720 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:33.720 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:33.720 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:33.720 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:33.720 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:33.720 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:33.720 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:33.720 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:33.720 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:33.720 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:33.720 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:33.720 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:33.720 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:33.720 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:33.720 05:47:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.720 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.720 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.720 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.980 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:33.980 "name": "Existed_Raid", 00:09:33.980 "uuid": "f7c5eda4-8093-415f-ab2a-e40bc77b3cbc", 00:09:33.980 "strip_size_kb": 64, 00:09:33.980 "state": "configuring", 00:09:33.980 "raid_level": "concat", 00:09:33.980 "superblock": true, 00:09:33.980 "num_base_bdevs": 3, 00:09:33.980 "num_base_bdevs_discovered": 2, 00:09:33.980 "num_base_bdevs_operational": 3, 00:09:33.980 "base_bdevs_list": [ 00:09:33.980 { 00:09:33.980 "name": "BaseBdev1", 00:09:33.980 "uuid": "66f0fdbd-f2cd-4766-a1a2-dae8628bf868", 00:09:33.980 "is_configured": true, 00:09:33.980 "data_offset": 2048, 00:09:33.980 "data_size": 63488 00:09:33.980 }, 00:09:33.980 { 00:09:33.980 "name": "BaseBdev2", 00:09:33.980 "uuid": "270fd0cb-2396-477b-bddf-370e61445b85", 00:09:33.980 "is_configured": true, 00:09:33.980 "data_offset": 2048, 00:09:33.980 "data_size": 63488 00:09:33.980 }, 00:09:33.980 { 00:09:33.980 "name": "BaseBdev3", 00:09:33.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.980 "is_configured": false, 00:09:33.980 "data_offset": 0, 00:09:33.980 "data_size": 0 00:09:33.980 } 00:09:33.980 ] 00:09:33.980 }' 00:09:33.980 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:33.980 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:34.240 05:47:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.240 [2024-12-12 05:47:41.707676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.240 [2024-12-12 05:47:41.707953] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:34.240 [2024-12-12 05:47:41.707975] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:34.240 [2024-12-12 05:47:41.708238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:34.240 [2024-12-12 05:47:41.708397] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:34.240 [2024-12-12 05:47:41.708407] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:34.240 [2024-12-12 05:47:41.708565] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.240 BaseBdev3 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.240 [ 00:09:34.240 { 00:09:34.240 "name": "BaseBdev3", 00:09:34.240 "aliases": [ 00:09:34.240 "2e592a94-fbe5-4317-b785-37f444b4e21a" 00:09:34.240 ], 00:09:34.240 "product_name": "Malloc disk", 00:09:34.240 "block_size": 512, 00:09:34.240 "num_blocks": 65536, 00:09:34.240 "uuid": "2e592a94-fbe5-4317-b785-37f444b4e21a", 00:09:34.240 "assigned_rate_limits": { 00:09:34.240 "rw_ios_per_sec": 0, 00:09:34.240 "rw_mbytes_per_sec": 0, 00:09:34.240 "r_mbytes_per_sec": 0, 00:09:34.240 "w_mbytes_per_sec": 0 00:09:34.240 }, 00:09:34.240 "claimed": true, 00:09:34.240 "claim_type": "exclusive_write", 00:09:34.240 "zoned": false, 00:09:34.240 "supported_io_types": { 00:09:34.240 "read": true, 00:09:34.240 "write": true, 00:09:34.240 "unmap": true, 00:09:34.240 "flush": true, 00:09:34.240 "reset": true, 00:09:34.240 "nvme_admin": false, 00:09:34.240 "nvme_io": false, 00:09:34.240 "nvme_io_md": false, 00:09:34.240 "write_zeroes": true, 00:09:34.240 "zcopy": true, 00:09:34.240 "get_zone_info": false, 00:09:34.240 "zone_management": false, 00:09:34.240 "zone_append": false, 00:09:34.240 "compare": false, 00:09:34.240 "compare_and_write": false, 00:09:34.240 "abort": true, 00:09:34.240 "seek_hole": false, 00:09:34.240 "seek_data": false, 
00:09:34.240 "copy": true, 00:09:34.240 "nvme_iov_md": false 00:09:34.240 }, 00:09:34.240 "memory_domains": [ 00:09:34.240 { 00:09:34.240 "dma_device_id": "system", 00:09:34.240 "dma_device_type": 1 00:09:34.240 }, 00:09:34.240 { 00:09:34.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.240 "dma_device_type": 2 00:09:34.240 } 00:09:34.240 ], 00:09:34.240 "driver_specific": {} 00:09:34.240 } 00:09:34.240 ] 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.240 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.241 05:47:41 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.501 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.501 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.501 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.501 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.501 05:47:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.501 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.501 "name": "Existed_Raid", 00:09:34.501 "uuid": "f7c5eda4-8093-415f-ab2a-e40bc77b3cbc", 00:09:34.501 "strip_size_kb": 64, 00:09:34.501 "state": "online", 00:09:34.501 "raid_level": "concat", 00:09:34.501 "superblock": true, 00:09:34.501 "num_base_bdevs": 3, 00:09:34.501 "num_base_bdevs_discovered": 3, 00:09:34.501 "num_base_bdevs_operational": 3, 00:09:34.501 "base_bdevs_list": [ 00:09:34.501 { 00:09:34.501 "name": "BaseBdev1", 00:09:34.501 "uuid": "66f0fdbd-f2cd-4766-a1a2-dae8628bf868", 00:09:34.501 "is_configured": true, 00:09:34.501 "data_offset": 2048, 00:09:34.501 "data_size": 63488 00:09:34.501 }, 00:09:34.501 { 00:09:34.501 "name": "BaseBdev2", 00:09:34.501 "uuid": "270fd0cb-2396-477b-bddf-370e61445b85", 00:09:34.501 "is_configured": true, 00:09:34.501 "data_offset": 2048, 00:09:34.501 "data_size": 63488 00:09:34.501 }, 00:09:34.501 { 00:09:34.501 "name": "BaseBdev3", 00:09:34.501 "uuid": "2e592a94-fbe5-4317-b785-37f444b4e21a", 00:09:34.501 "is_configured": true, 00:09:34.501 "data_offset": 2048, 00:09:34.501 "data_size": 63488 00:09:34.501 } 00:09:34.501 ] 00:09:34.501 }' 00:09:34.501 05:47:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.501 05:47:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.761 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:34.761 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:34.761 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:34.761 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:34.761 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:34.761 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:34.761 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:34.761 05:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.761 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:34.761 05:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.761 [2024-12-12 05:47:42.203166] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.761 05:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.761 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:34.761 "name": "Existed_Raid", 00:09:34.761 "aliases": [ 00:09:34.761 "f7c5eda4-8093-415f-ab2a-e40bc77b3cbc" 00:09:34.761 ], 00:09:34.761 "product_name": "Raid Volume", 00:09:34.761 "block_size": 512, 00:09:34.761 "num_blocks": 190464, 00:09:34.761 "uuid": "f7c5eda4-8093-415f-ab2a-e40bc77b3cbc", 00:09:34.761 "assigned_rate_limits": { 00:09:34.761 "rw_ios_per_sec": 0, 00:09:34.761 "rw_mbytes_per_sec": 0, 00:09:34.761 
"r_mbytes_per_sec": 0, 00:09:34.761 "w_mbytes_per_sec": 0 00:09:34.761 }, 00:09:34.761 "claimed": false, 00:09:34.761 "zoned": false, 00:09:34.761 "supported_io_types": { 00:09:34.761 "read": true, 00:09:34.761 "write": true, 00:09:34.761 "unmap": true, 00:09:34.761 "flush": true, 00:09:34.761 "reset": true, 00:09:34.761 "nvme_admin": false, 00:09:34.761 "nvme_io": false, 00:09:34.761 "nvme_io_md": false, 00:09:34.761 "write_zeroes": true, 00:09:34.761 "zcopy": false, 00:09:34.761 "get_zone_info": false, 00:09:34.761 "zone_management": false, 00:09:34.761 "zone_append": false, 00:09:34.761 "compare": false, 00:09:34.761 "compare_and_write": false, 00:09:34.761 "abort": false, 00:09:34.761 "seek_hole": false, 00:09:34.761 "seek_data": false, 00:09:34.761 "copy": false, 00:09:34.761 "nvme_iov_md": false 00:09:34.761 }, 00:09:34.761 "memory_domains": [ 00:09:34.761 { 00:09:34.761 "dma_device_id": "system", 00:09:34.761 "dma_device_type": 1 00:09:34.761 }, 00:09:34.761 { 00:09:34.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.761 "dma_device_type": 2 00:09:34.761 }, 00:09:34.761 { 00:09:34.761 "dma_device_id": "system", 00:09:34.761 "dma_device_type": 1 00:09:34.761 }, 00:09:34.761 { 00:09:34.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.761 "dma_device_type": 2 00:09:34.761 }, 00:09:34.761 { 00:09:34.761 "dma_device_id": "system", 00:09:34.761 "dma_device_type": 1 00:09:34.761 }, 00:09:34.761 { 00:09:34.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.761 "dma_device_type": 2 00:09:34.761 } 00:09:34.761 ], 00:09:34.761 "driver_specific": { 00:09:34.761 "raid": { 00:09:34.761 "uuid": "f7c5eda4-8093-415f-ab2a-e40bc77b3cbc", 00:09:34.761 "strip_size_kb": 64, 00:09:34.761 "state": "online", 00:09:34.761 "raid_level": "concat", 00:09:34.761 "superblock": true, 00:09:34.761 "num_base_bdevs": 3, 00:09:34.761 "num_base_bdevs_discovered": 3, 00:09:34.761 "num_base_bdevs_operational": 3, 00:09:34.761 "base_bdevs_list": [ 00:09:34.761 { 00:09:34.761 
"name": "BaseBdev1", 00:09:34.761 "uuid": "66f0fdbd-f2cd-4766-a1a2-dae8628bf868", 00:09:34.761 "is_configured": true, 00:09:34.761 "data_offset": 2048, 00:09:34.761 "data_size": 63488 00:09:34.761 }, 00:09:34.761 { 00:09:34.761 "name": "BaseBdev2", 00:09:34.761 "uuid": "270fd0cb-2396-477b-bddf-370e61445b85", 00:09:34.761 "is_configured": true, 00:09:34.761 "data_offset": 2048, 00:09:34.761 "data_size": 63488 00:09:34.761 }, 00:09:34.761 { 00:09:34.761 "name": "BaseBdev3", 00:09:34.761 "uuid": "2e592a94-fbe5-4317-b785-37f444b4e21a", 00:09:34.761 "is_configured": true, 00:09:34.761 "data_offset": 2048, 00:09:34.761 "data_size": 63488 00:09:34.761 } 00:09:34.761 ] 00:09:34.761 } 00:09:34.761 } 00:09:34.761 }' 00:09:34.761 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:35.021 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:35.021 BaseBdev2 00:09:35.021 BaseBdev3' 00:09:35.021 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.021 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:35.021 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.021 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:35.021 05:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.021 05:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.021 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.021 05:47:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.022 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.022 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.022 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.022 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:35.022 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.022 05:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.022 05:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.022 05:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.022 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.022 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.022 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:35.022 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:35.022 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:35.022 05:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.022 05:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.022 05:47:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.022 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:35.022 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:35.022 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:35.022 05:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.022 05:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.022 [2024-12-12 05:47:42.466492] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:35.022 [2024-12-12 05:47:42.466580] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.022 [2024-12-12 05:47:42.466641] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:35.282 "name": "Existed_Raid", 00:09:35.282 "uuid": "f7c5eda4-8093-415f-ab2a-e40bc77b3cbc", 00:09:35.282 "strip_size_kb": 64, 00:09:35.282 "state": "offline", 00:09:35.282 "raid_level": "concat", 00:09:35.282 "superblock": true, 00:09:35.282 "num_base_bdevs": 3, 00:09:35.282 "num_base_bdevs_discovered": 2, 00:09:35.282 "num_base_bdevs_operational": 2, 00:09:35.282 "base_bdevs_list": [ 00:09:35.282 { 00:09:35.282 "name": null, 00:09:35.282 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:35.282 "is_configured": false, 00:09:35.282 "data_offset": 0, 00:09:35.282 "data_size": 63488 00:09:35.282 }, 00:09:35.282 { 00:09:35.282 "name": "BaseBdev2", 00:09:35.282 "uuid": "270fd0cb-2396-477b-bddf-370e61445b85", 00:09:35.282 "is_configured": true, 00:09:35.282 "data_offset": 2048, 00:09:35.282 "data_size": 63488 00:09:35.282 }, 00:09:35.282 { 00:09:35.282 "name": "BaseBdev3", 00:09:35.282 "uuid": "2e592a94-fbe5-4317-b785-37f444b4e21a", 00:09:35.282 "is_configured": true, 00:09:35.282 "data_offset": 2048, 00:09:35.282 "data_size": 63488 00:09:35.282 } 00:09:35.282 ] 00:09:35.282 }' 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:35.282 05:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.542 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:35.542 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.542 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.542 05:47:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:35.542 05:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.542 05:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.542 05:47:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.542 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:35.542 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:35.542 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:35.542 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.542 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.542 [2024-12-12 05:47:43.028802] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:35.802 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.802 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:35.802 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.802 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.802 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.802 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:35.802 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.802 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.802 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:35.802 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:35.802 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:35.802 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.802 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.802 [2024-12-12 05:47:43.183103] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:35.803 [2024-12-12 05:47:43.183157] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:35.803 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.803 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:35.803 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.803 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.803 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.803 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.803 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:35.803 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.063 BaseBdev2 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.063 
05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.063 [ 00:09:36.063 { 00:09:36.063 "name": "BaseBdev2", 00:09:36.063 "aliases": [ 00:09:36.063 "86a52240-6749-48cd-b49c-522354b6f372" 00:09:36.063 ], 00:09:36.063 "product_name": "Malloc disk", 00:09:36.063 "block_size": 512, 00:09:36.063 "num_blocks": 65536, 00:09:36.063 "uuid": "86a52240-6749-48cd-b49c-522354b6f372", 00:09:36.063 "assigned_rate_limits": { 00:09:36.063 "rw_ios_per_sec": 0, 00:09:36.063 "rw_mbytes_per_sec": 0, 00:09:36.063 "r_mbytes_per_sec": 0, 00:09:36.063 "w_mbytes_per_sec": 0 
00:09:36.063 }, 00:09:36.063 "claimed": false, 00:09:36.063 "zoned": false, 00:09:36.063 "supported_io_types": { 00:09:36.063 "read": true, 00:09:36.063 "write": true, 00:09:36.063 "unmap": true, 00:09:36.063 "flush": true, 00:09:36.063 "reset": true, 00:09:36.063 "nvme_admin": false, 00:09:36.063 "nvme_io": false, 00:09:36.063 "nvme_io_md": false, 00:09:36.063 "write_zeroes": true, 00:09:36.063 "zcopy": true, 00:09:36.063 "get_zone_info": false, 00:09:36.063 "zone_management": false, 00:09:36.063 "zone_append": false, 00:09:36.063 "compare": false, 00:09:36.063 "compare_and_write": false, 00:09:36.063 "abort": true, 00:09:36.063 "seek_hole": false, 00:09:36.063 "seek_data": false, 00:09:36.063 "copy": true, 00:09:36.063 "nvme_iov_md": false 00:09:36.063 }, 00:09:36.063 "memory_domains": [ 00:09:36.063 { 00:09:36.063 "dma_device_id": "system", 00:09:36.063 "dma_device_type": 1 00:09:36.063 }, 00:09:36.063 { 00:09:36.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.063 "dma_device_type": 2 00:09:36.063 } 00:09:36.063 ], 00:09:36.063 "driver_specific": {} 00:09:36.063 } 00:09:36.063 ] 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.063 BaseBdev3 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.063 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.064 [ 00:09:36.064 { 00:09:36.064 "name": "BaseBdev3", 00:09:36.064 "aliases": [ 00:09:36.064 "88598972-167f-4a74-9aa6-e3fc5787dfa7" 00:09:36.064 ], 00:09:36.064 "product_name": "Malloc disk", 00:09:36.064 "block_size": 512, 00:09:36.064 "num_blocks": 65536, 00:09:36.064 "uuid": "88598972-167f-4a74-9aa6-e3fc5787dfa7", 00:09:36.064 "assigned_rate_limits": { 00:09:36.064 "rw_ios_per_sec": 0, 00:09:36.064 "rw_mbytes_per_sec": 0, 
00:09:36.064 "r_mbytes_per_sec": 0, 00:09:36.064 "w_mbytes_per_sec": 0 00:09:36.064 }, 00:09:36.064 "claimed": false, 00:09:36.064 "zoned": false, 00:09:36.064 "supported_io_types": { 00:09:36.064 "read": true, 00:09:36.064 "write": true, 00:09:36.064 "unmap": true, 00:09:36.064 "flush": true, 00:09:36.064 "reset": true, 00:09:36.064 "nvme_admin": false, 00:09:36.064 "nvme_io": false, 00:09:36.064 "nvme_io_md": false, 00:09:36.064 "write_zeroes": true, 00:09:36.064 "zcopy": true, 00:09:36.064 "get_zone_info": false, 00:09:36.064 "zone_management": false, 00:09:36.064 "zone_append": false, 00:09:36.064 "compare": false, 00:09:36.064 "compare_and_write": false, 00:09:36.064 "abort": true, 00:09:36.064 "seek_hole": false, 00:09:36.064 "seek_data": false, 00:09:36.064 "copy": true, 00:09:36.064 "nvme_iov_md": false 00:09:36.064 }, 00:09:36.064 "memory_domains": [ 00:09:36.064 { 00:09:36.064 "dma_device_id": "system", 00:09:36.064 "dma_device_type": 1 00:09:36.064 }, 00:09:36.064 { 00:09:36.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:36.064 "dma_device_type": 2 00:09:36.064 } 00:09:36.064 ], 00:09:36.064 "driver_specific": {} 00:09:36.064 } 00:09:36.064 ] 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:36.064 [2024-12-12 05:47:43.488197] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:36.064 [2024-12-12 05:47:43.488294] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:36.064 [2024-12-12 05:47:43.488334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:36.064 [2024-12-12 05:47:43.490057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.064 05:47:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.064 "name": "Existed_Raid", 00:09:36.064 "uuid": "c25d26c4-9c8d-4b02-960b-1d6f247feda2", 00:09:36.064 "strip_size_kb": 64, 00:09:36.064 "state": "configuring", 00:09:36.064 "raid_level": "concat", 00:09:36.064 "superblock": true, 00:09:36.064 "num_base_bdevs": 3, 00:09:36.064 "num_base_bdevs_discovered": 2, 00:09:36.064 "num_base_bdevs_operational": 3, 00:09:36.064 "base_bdevs_list": [ 00:09:36.064 { 00:09:36.064 "name": "BaseBdev1", 00:09:36.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.064 "is_configured": false, 00:09:36.064 "data_offset": 0, 00:09:36.064 "data_size": 0 00:09:36.064 }, 00:09:36.064 { 00:09:36.064 "name": "BaseBdev2", 00:09:36.064 "uuid": "86a52240-6749-48cd-b49c-522354b6f372", 00:09:36.064 "is_configured": true, 00:09:36.064 "data_offset": 2048, 00:09:36.064 "data_size": 63488 00:09:36.064 }, 00:09:36.064 { 00:09:36.064 "name": "BaseBdev3", 00:09:36.064 "uuid": "88598972-167f-4a74-9aa6-e3fc5787dfa7", 00:09:36.064 "is_configured": true, 00:09:36.064 "data_offset": 2048, 00:09:36.064 "data_size": 63488 00:09:36.064 } 00:09:36.064 ] 00:09:36.064 }' 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.064 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.634 [2024-12-12 05:47:43.915472] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.634 "name": "Existed_Raid", 00:09:36.634 "uuid": "c25d26c4-9c8d-4b02-960b-1d6f247feda2", 00:09:36.634 "strip_size_kb": 64, 00:09:36.634 "state": "configuring", 00:09:36.634 "raid_level": "concat", 00:09:36.634 "superblock": true, 00:09:36.634 "num_base_bdevs": 3, 00:09:36.634 "num_base_bdevs_discovered": 1, 00:09:36.634 "num_base_bdevs_operational": 3, 00:09:36.634 "base_bdevs_list": [ 00:09:36.634 { 00:09:36.634 "name": "BaseBdev1", 00:09:36.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:36.634 "is_configured": false, 00:09:36.634 "data_offset": 0, 00:09:36.634 "data_size": 0 00:09:36.634 }, 00:09:36.634 { 00:09:36.634 "name": null, 00:09:36.634 "uuid": "86a52240-6749-48cd-b49c-522354b6f372", 00:09:36.634 "is_configured": false, 00:09:36.634 "data_offset": 0, 00:09:36.634 "data_size": 63488 00:09:36.634 }, 00:09:36.634 { 00:09:36.634 "name": "BaseBdev3", 00:09:36.634 "uuid": "88598972-167f-4a74-9aa6-e3fc5787dfa7", 00:09:36.634 "is_configured": true, 00:09:36.634 "data_offset": 2048, 00:09:36.634 "data_size": 63488 00:09:36.634 } 00:09:36.634 ] 00:09:36.634 }' 00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.634 05:47:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.894 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:36.894 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.894 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:36.894 05:47:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:36.894 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:36.894 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:36.894 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:36.894 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:36.894 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.154 [2024-12-12 05:47:44.449433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:37.154 BaseBdev1 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.154 
05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.154 [ 00:09:37.154 { 00:09:37.154 "name": "BaseBdev1", 00:09:37.154 "aliases": [ 00:09:37.154 "fa58a2af-2486-434d-9266-e5ece0fb2df9" 00:09:37.154 ], 00:09:37.154 "product_name": "Malloc disk", 00:09:37.154 "block_size": 512, 00:09:37.154 "num_blocks": 65536, 00:09:37.154 "uuid": "fa58a2af-2486-434d-9266-e5ece0fb2df9", 00:09:37.154 "assigned_rate_limits": { 00:09:37.154 "rw_ios_per_sec": 0, 00:09:37.154 "rw_mbytes_per_sec": 0, 00:09:37.154 "r_mbytes_per_sec": 0, 00:09:37.154 "w_mbytes_per_sec": 0 00:09:37.154 }, 00:09:37.154 "claimed": true, 00:09:37.154 "claim_type": "exclusive_write", 00:09:37.154 "zoned": false, 00:09:37.154 "supported_io_types": { 00:09:37.154 "read": true, 00:09:37.154 "write": true, 00:09:37.154 "unmap": true, 00:09:37.154 "flush": true, 00:09:37.154 "reset": true, 00:09:37.154 "nvme_admin": false, 00:09:37.154 "nvme_io": false, 00:09:37.154 "nvme_io_md": false, 00:09:37.154 "write_zeroes": true, 00:09:37.154 "zcopy": true, 00:09:37.154 "get_zone_info": false, 00:09:37.154 "zone_management": false, 00:09:37.154 "zone_append": false, 00:09:37.154 "compare": false, 00:09:37.154 "compare_and_write": false, 00:09:37.154 "abort": true, 00:09:37.154 "seek_hole": false, 00:09:37.154 "seek_data": false, 00:09:37.154 "copy": true, 00:09:37.154 "nvme_iov_md": false 00:09:37.154 }, 00:09:37.154 "memory_domains": [ 00:09:37.154 { 00:09:37.154 "dma_device_id": "system", 00:09:37.154 "dma_device_type": 1 00:09:37.154 }, 00:09:37.154 { 00:09:37.154 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:37.154 "dma_device_type": 2 00:09:37.154 } 00:09:37.154 ], 00:09:37.154 "driver_specific": {} 00:09:37.154 } 00:09:37.154 ] 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.154 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.154 "name": "Existed_Raid", 00:09:37.154 "uuid": "c25d26c4-9c8d-4b02-960b-1d6f247feda2", 00:09:37.154 "strip_size_kb": 64, 00:09:37.154 "state": "configuring", 00:09:37.154 "raid_level": "concat", 00:09:37.154 "superblock": true, 00:09:37.154 "num_base_bdevs": 3, 00:09:37.154 "num_base_bdevs_discovered": 2, 00:09:37.154 "num_base_bdevs_operational": 3, 00:09:37.154 "base_bdevs_list": [ 00:09:37.154 { 00:09:37.154 "name": "BaseBdev1", 00:09:37.154 "uuid": "fa58a2af-2486-434d-9266-e5ece0fb2df9", 00:09:37.154 "is_configured": true, 00:09:37.154 "data_offset": 2048, 00:09:37.154 "data_size": 63488 00:09:37.154 }, 00:09:37.154 { 00:09:37.154 "name": null, 00:09:37.154 "uuid": "86a52240-6749-48cd-b49c-522354b6f372", 00:09:37.154 "is_configured": false, 00:09:37.154 "data_offset": 0, 00:09:37.154 "data_size": 63488 00:09:37.154 }, 00:09:37.154 { 00:09:37.154 "name": "BaseBdev3", 00:09:37.155 "uuid": "88598972-167f-4a74-9aa6-e3fc5787dfa7", 00:09:37.155 "is_configured": true, 00:09:37.155 "data_offset": 2048, 00:09:37.155 "data_size": 63488 00:09:37.155 } 00:09:37.155 ] 00:09:37.155 }' 00:09:37.155 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.155 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.415 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:37.415 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.415 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.415 05:47:44 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:09:37.415 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.415 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:37.415 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:37.415 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.415 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.415 [2024-12-12 05:47:44.924683] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:37.415 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.415 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:37.415 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.415 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.415 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.415 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.415 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.415 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.415 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.415 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.415 05:47:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.674 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.674 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.674 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.674 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.674 05:47:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.674 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.674 "name": "Existed_Raid", 00:09:37.674 "uuid": "c25d26c4-9c8d-4b02-960b-1d6f247feda2", 00:09:37.674 "strip_size_kb": 64, 00:09:37.674 "state": "configuring", 00:09:37.674 "raid_level": "concat", 00:09:37.674 "superblock": true, 00:09:37.674 "num_base_bdevs": 3, 00:09:37.675 "num_base_bdevs_discovered": 1, 00:09:37.675 "num_base_bdevs_operational": 3, 00:09:37.675 "base_bdevs_list": [ 00:09:37.675 { 00:09:37.675 "name": "BaseBdev1", 00:09:37.675 "uuid": "fa58a2af-2486-434d-9266-e5ece0fb2df9", 00:09:37.675 "is_configured": true, 00:09:37.675 "data_offset": 2048, 00:09:37.675 "data_size": 63488 00:09:37.675 }, 00:09:37.675 { 00:09:37.675 "name": null, 00:09:37.675 "uuid": "86a52240-6749-48cd-b49c-522354b6f372", 00:09:37.675 "is_configured": false, 00:09:37.675 "data_offset": 0, 00:09:37.675 "data_size": 63488 00:09:37.675 }, 00:09:37.675 { 00:09:37.675 "name": null, 00:09:37.675 "uuid": "88598972-167f-4a74-9aa6-e3fc5787dfa7", 00:09:37.675 "is_configured": false, 00:09:37.675 "data_offset": 0, 00:09:37.675 "data_size": 63488 00:09:37.675 } 00:09:37.675 ] 00:09:37.675 }' 00:09:37.675 05:47:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.675 05:47:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.934 [2024-12-12 05:47:45.375940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:37.934 05:47:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:37.934 "name": "Existed_Raid", 00:09:37.934 "uuid": "c25d26c4-9c8d-4b02-960b-1d6f247feda2", 00:09:37.934 "strip_size_kb": 64, 00:09:37.934 "state": "configuring", 00:09:37.934 "raid_level": "concat", 00:09:37.934 "superblock": true, 00:09:37.934 "num_base_bdevs": 3, 00:09:37.934 "num_base_bdevs_discovered": 2, 00:09:37.934 "num_base_bdevs_operational": 3, 00:09:37.934 "base_bdevs_list": [ 00:09:37.934 { 00:09:37.934 "name": "BaseBdev1", 00:09:37.934 "uuid": "fa58a2af-2486-434d-9266-e5ece0fb2df9", 00:09:37.934 "is_configured": true, 00:09:37.934 "data_offset": 2048, 00:09:37.934 "data_size": 63488 00:09:37.934 }, 00:09:37.934 { 00:09:37.934 "name": null, 00:09:37.934 "uuid": "86a52240-6749-48cd-b49c-522354b6f372", 00:09:37.934 "is_configured": 
false, 00:09:37.934 "data_offset": 0, 00:09:37.934 "data_size": 63488 00:09:37.934 }, 00:09:37.934 { 00:09:37.934 "name": "BaseBdev3", 00:09:37.934 "uuid": "88598972-167f-4a74-9aa6-e3fc5787dfa7", 00:09:37.934 "is_configured": true, 00:09:37.934 "data_offset": 2048, 00:09:37.934 "data_size": 63488 00:09:37.934 } 00:09:37.934 ] 00:09:37.934 }' 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:37.934 05:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.503 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.503 05:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.503 05:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.504 [2024-12-12 05:47:45.839196] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:38.504 05:47:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.504 "name": "Existed_Raid", 00:09:38.504 "uuid": "c25d26c4-9c8d-4b02-960b-1d6f247feda2", 00:09:38.504 "strip_size_kb": 64, 00:09:38.504 "state": "configuring", 00:09:38.504 "raid_level": "concat", 00:09:38.504 "superblock": true, 00:09:38.504 "num_base_bdevs": 3, 00:09:38.504 
"num_base_bdevs_discovered": 1, 00:09:38.504 "num_base_bdevs_operational": 3, 00:09:38.504 "base_bdevs_list": [ 00:09:38.504 { 00:09:38.504 "name": null, 00:09:38.504 "uuid": "fa58a2af-2486-434d-9266-e5ece0fb2df9", 00:09:38.504 "is_configured": false, 00:09:38.504 "data_offset": 0, 00:09:38.504 "data_size": 63488 00:09:38.504 }, 00:09:38.504 { 00:09:38.504 "name": null, 00:09:38.504 "uuid": "86a52240-6749-48cd-b49c-522354b6f372", 00:09:38.504 "is_configured": false, 00:09:38.504 "data_offset": 0, 00:09:38.504 "data_size": 63488 00:09:38.504 }, 00:09:38.504 { 00:09:38.504 "name": "BaseBdev3", 00:09:38.504 "uuid": "88598972-167f-4a74-9aa6-e3fc5787dfa7", 00:09:38.504 "is_configured": true, 00:09:38.504 "data_offset": 2048, 00:09:38.504 "data_size": 63488 00:09:38.504 } 00:09:38.504 ] 00:09:38.504 }' 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.504 05:47:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.074 05:47:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.074 [2024-12-12 05:47:46.407116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.074 
05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.074 "name": "Existed_Raid", 00:09:39.074 "uuid": "c25d26c4-9c8d-4b02-960b-1d6f247feda2", 00:09:39.074 "strip_size_kb": 64, 00:09:39.074 "state": "configuring", 00:09:39.074 "raid_level": "concat", 00:09:39.074 "superblock": true, 00:09:39.074 "num_base_bdevs": 3, 00:09:39.074 "num_base_bdevs_discovered": 2, 00:09:39.074 "num_base_bdevs_operational": 3, 00:09:39.074 "base_bdevs_list": [ 00:09:39.074 { 00:09:39.074 "name": null, 00:09:39.074 "uuid": "fa58a2af-2486-434d-9266-e5ece0fb2df9", 00:09:39.074 "is_configured": false, 00:09:39.074 "data_offset": 0, 00:09:39.074 "data_size": 63488 00:09:39.074 }, 00:09:39.074 { 00:09:39.074 "name": "BaseBdev2", 00:09:39.074 "uuid": "86a52240-6749-48cd-b49c-522354b6f372", 00:09:39.074 "is_configured": true, 00:09:39.074 "data_offset": 2048, 00:09:39.074 "data_size": 63488 00:09:39.074 }, 00:09:39.074 { 00:09:39.074 "name": "BaseBdev3", 00:09:39.074 "uuid": "88598972-167f-4a74-9aa6-e3fc5787dfa7", 00:09:39.074 "is_configured": true, 00:09:39.074 "data_offset": 2048, 00:09:39.074 "data_size": 63488 00:09:39.074 } 00:09:39.074 ] 00:09:39.074 }' 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.074 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.334 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.334 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.334 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.334 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 
00:09:39.334 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u fa58a2af-2486-434d-9266-e5ece0fb2df9 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.594 [2024-12-12 05:47:46.950384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:39.594 [2024-12-12 05:47:46.950711] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:39.594 [2024-12-12 05:47:46.950768] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:39.594 [2024-12-12 05:47:46.951071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:39.594 NewBaseBdev 00:09:39.594 [2024-12-12 05:47:46.951265] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:39.594 [2024-12-12 05:47:46.951278] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:09:39.594 [2024-12-12 05:47:46.951432] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.594 [ 00:09:39.594 { 00:09:39.594 "name": "NewBaseBdev", 00:09:39.594 "aliases": [ 00:09:39.594 "fa58a2af-2486-434d-9266-e5ece0fb2df9" 00:09:39.594 ], 00:09:39.594 "product_name": "Malloc disk", 00:09:39.594 "block_size": 512, 
00:09:39.594 "num_blocks": 65536, 00:09:39.594 "uuid": "fa58a2af-2486-434d-9266-e5ece0fb2df9", 00:09:39.594 "assigned_rate_limits": { 00:09:39.594 "rw_ios_per_sec": 0, 00:09:39.594 "rw_mbytes_per_sec": 0, 00:09:39.594 "r_mbytes_per_sec": 0, 00:09:39.594 "w_mbytes_per_sec": 0 00:09:39.594 }, 00:09:39.594 "claimed": true, 00:09:39.594 "claim_type": "exclusive_write", 00:09:39.594 "zoned": false, 00:09:39.594 "supported_io_types": { 00:09:39.594 "read": true, 00:09:39.594 "write": true, 00:09:39.594 "unmap": true, 00:09:39.594 "flush": true, 00:09:39.594 "reset": true, 00:09:39.594 "nvme_admin": false, 00:09:39.594 "nvme_io": false, 00:09:39.594 "nvme_io_md": false, 00:09:39.594 "write_zeroes": true, 00:09:39.594 "zcopy": true, 00:09:39.594 "get_zone_info": false, 00:09:39.594 "zone_management": false, 00:09:39.594 "zone_append": false, 00:09:39.594 "compare": false, 00:09:39.594 "compare_and_write": false, 00:09:39.594 "abort": true, 00:09:39.594 "seek_hole": false, 00:09:39.594 "seek_data": false, 00:09:39.594 "copy": true, 00:09:39.594 "nvme_iov_md": false 00:09:39.594 }, 00:09:39.594 "memory_domains": [ 00:09:39.594 { 00:09:39.594 "dma_device_id": "system", 00:09:39.594 "dma_device_type": 1 00:09:39.594 }, 00:09:39.594 { 00:09:39.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:39.594 "dma_device_type": 2 00:09:39.594 } 00:09:39.594 ], 00:09:39.594 "driver_specific": {} 00:09:39.594 } 00:09:39.594 ] 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:09:39.594 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:39.595 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.595 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.595 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.595 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.595 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.595 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.595 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.595 05:47:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.595 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.595 05:47:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.595 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.595 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.595 "name": "Existed_Raid", 00:09:39.595 "uuid": "c25d26c4-9c8d-4b02-960b-1d6f247feda2", 00:09:39.595 "strip_size_kb": 64, 00:09:39.595 "state": "online", 00:09:39.595 "raid_level": "concat", 00:09:39.595 "superblock": true, 00:09:39.595 "num_base_bdevs": 3, 00:09:39.595 "num_base_bdevs_discovered": 3, 00:09:39.595 "num_base_bdevs_operational": 3, 00:09:39.595 "base_bdevs_list": [ 00:09:39.595 { 00:09:39.595 "name": "NewBaseBdev", 00:09:39.595 "uuid": 
"fa58a2af-2486-434d-9266-e5ece0fb2df9", 00:09:39.595 "is_configured": true, 00:09:39.595 "data_offset": 2048, 00:09:39.595 "data_size": 63488 00:09:39.595 }, 00:09:39.595 { 00:09:39.595 "name": "BaseBdev2", 00:09:39.595 "uuid": "86a52240-6749-48cd-b49c-522354b6f372", 00:09:39.595 "is_configured": true, 00:09:39.595 "data_offset": 2048, 00:09:39.595 "data_size": 63488 00:09:39.595 }, 00:09:39.595 { 00:09:39.595 "name": "BaseBdev3", 00:09:39.595 "uuid": "88598972-167f-4a74-9aa6-e3fc5787dfa7", 00:09:39.595 "is_configured": true, 00:09:39.595 "data_offset": 2048, 00:09:39.595 "data_size": 63488 00:09:39.595 } 00:09:39.595 ] 00:09:39.595 }' 00:09:39.595 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.595 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:40.165 [2024-12-12 05:47:47.405993] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:40.165 "name": "Existed_Raid", 00:09:40.165 "aliases": [ 00:09:40.165 "c25d26c4-9c8d-4b02-960b-1d6f247feda2" 00:09:40.165 ], 00:09:40.165 "product_name": "Raid Volume", 00:09:40.165 "block_size": 512, 00:09:40.165 "num_blocks": 190464, 00:09:40.165 "uuid": "c25d26c4-9c8d-4b02-960b-1d6f247feda2", 00:09:40.165 "assigned_rate_limits": { 00:09:40.165 "rw_ios_per_sec": 0, 00:09:40.165 "rw_mbytes_per_sec": 0, 00:09:40.165 "r_mbytes_per_sec": 0, 00:09:40.165 "w_mbytes_per_sec": 0 00:09:40.165 }, 00:09:40.165 "claimed": false, 00:09:40.165 "zoned": false, 00:09:40.165 "supported_io_types": { 00:09:40.165 "read": true, 00:09:40.165 "write": true, 00:09:40.165 "unmap": true, 00:09:40.165 "flush": true, 00:09:40.165 "reset": true, 00:09:40.165 "nvme_admin": false, 00:09:40.165 "nvme_io": false, 00:09:40.165 "nvme_io_md": false, 00:09:40.165 "write_zeroes": true, 00:09:40.165 "zcopy": false, 00:09:40.165 "get_zone_info": false, 00:09:40.165 "zone_management": false, 00:09:40.165 "zone_append": false, 00:09:40.165 "compare": false, 00:09:40.165 "compare_and_write": false, 00:09:40.165 "abort": false, 00:09:40.165 "seek_hole": false, 00:09:40.165 "seek_data": false, 00:09:40.165 "copy": false, 00:09:40.165 "nvme_iov_md": false 00:09:40.165 }, 00:09:40.165 "memory_domains": [ 00:09:40.165 { 00:09:40.165 "dma_device_id": "system", 00:09:40.165 "dma_device_type": 1 00:09:40.165 }, 00:09:40.165 { 00:09:40.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.165 "dma_device_type": 2 00:09:40.165 }, 00:09:40.165 { 00:09:40.165 "dma_device_id": "system", 00:09:40.165 "dma_device_type": 1 00:09:40.165 }, 00:09:40.165 { 00:09:40.165 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.165 "dma_device_type": 2 00:09:40.165 }, 00:09:40.165 { 00:09:40.165 "dma_device_id": "system", 00:09:40.165 "dma_device_type": 1 00:09:40.165 }, 00:09:40.165 { 00:09:40.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.165 "dma_device_type": 2 00:09:40.165 } 00:09:40.165 ], 00:09:40.165 "driver_specific": { 00:09:40.165 "raid": { 00:09:40.165 "uuid": "c25d26c4-9c8d-4b02-960b-1d6f247feda2", 00:09:40.165 "strip_size_kb": 64, 00:09:40.165 "state": "online", 00:09:40.165 "raid_level": "concat", 00:09:40.165 "superblock": true, 00:09:40.165 "num_base_bdevs": 3, 00:09:40.165 "num_base_bdevs_discovered": 3, 00:09:40.165 "num_base_bdevs_operational": 3, 00:09:40.165 "base_bdevs_list": [ 00:09:40.165 { 00:09:40.165 "name": "NewBaseBdev", 00:09:40.165 "uuid": "fa58a2af-2486-434d-9266-e5ece0fb2df9", 00:09:40.165 "is_configured": true, 00:09:40.165 "data_offset": 2048, 00:09:40.165 "data_size": 63488 00:09:40.165 }, 00:09:40.165 { 00:09:40.165 "name": "BaseBdev2", 00:09:40.165 "uuid": "86a52240-6749-48cd-b49c-522354b6f372", 00:09:40.165 "is_configured": true, 00:09:40.165 "data_offset": 2048, 00:09:40.165 "data_size": 63488 00:09:40.165 }, 00:09:40.165 { 00:09:40.165 "name": "BaseBdev3", 00:09:40.165 "uuid": "88598972-167f-4a74-9aa6-e3fc5787dfa7", 00:09:40.165 "is_configured": true, 00:09:40.165 "data_offset": 2048, 00:09:40.165 "data_size": 63488 00:09:40.165 } 00:09:40.165 ] 00:09:40.165 } 00:09:40.165 } 00:09:40.165 }' 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:40.165 BaseBdev2 00:09:40.165 BaseBdev3' 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.165 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:40.166 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.166 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.166 [2024-12-12 05:47:47.673235] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.166 [2024-12-12 05:47:47.673263] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:40.166 [2024-12-12 05:47:47.673334] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:40.166 [2024-12-12 05:47:47.673388] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:40.166 [2024-12-12 05:47:47.673399] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:09:40.166 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.166 05:47:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67187 00:09:40.166 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 67187 ']' 00:09:40.166 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67187 00:09:40.166 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:40.166 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:40.166 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67187 00:09:40.425 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:40.425 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:40.425 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67187' 00:09:40.425 killing process with pid 67187 00:09:40.425 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67187 00:09:40.425 [2024-12-12 05:47:47.718085] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:40.425 05:47:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67187 00:09:40.685 [2024-12-12 05:47:48.009179] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:41.647 05:47:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:41.647 ************************************ 00:09:41.647 END TEST raid_state_function_test_sb 00:09:41.647 ************************************ 00:09:41.647 00:09:41.647 real 0m10.299s 
00:09:41.647 user 0m16.427s 00:09:41.647 sys 0m1.769s 00:09:41.647 05:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:41.647 05:47:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.647 05:47:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:09:41.647 05:47:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:41.647 05:47:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:41.647 05:47:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:41.647 ************************************ 00:09:41.647 START TEST raid_superblock_test 00:09:41.647 ************************************ 00:09:41.647 05:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:09:41.647 05:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:41.647 05:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:41.647 05:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:41.647 05:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:41.647 05:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:41.907 05:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:41.907 05:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:41.907 05:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:41.907 05:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:41.907 05:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:41.907 05:47:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:41.907 05:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:41.907 05:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:41.907 05:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:41.907 05:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:41.908 05:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:41.908 05:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=67806 00:09:41.908 05:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:41.908 05:47:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 67806 00:09:41.908 05:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 67806 ']' 00:09:41.908 05:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:41.908 05:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:41.908 05:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:41.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:41.908 05:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:41.908 05:47:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.908 [2024-12-12 05:47:49.251535] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:09:41.908 [2024-12-12 05:47:49.251748] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67806 ] 00:09:41.908 [2024-12-12 05:47:49.403832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.167 [2024-12-12 05:47:49.511575] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.427 [2024-12-12 05:47:49.704163] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.427 [2024-12-12 05:47:49.704296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:42.687 
05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.687 malloc1
00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.687 [2024-12-12 05:47:50.118411] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:42.687 [2024-12-12 05:47:50.118471] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:42.687 [2024-12-12 05:47:50.118493] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:09:42.687 [2024-12-12 05:47:50.118519] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:42.687 [2024-12-12 05:47:50.120603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:42.687 [2024-12-12 05:47:50.120639] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:42.687 pt1
00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:42.687 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:42.688 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:42.688 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:09:42.688 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.688 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.688 malloc2
00:09:42.688 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.688 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:42.688 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.688 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.688 [2024-12-12 05:47:50.174687] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:42.688 [2024-12-12 05:47:50.174792] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:42.688 [2024-12-12 05:47:50.174834] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:09:42.688 [2024-12-12 05:47:50.174863] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:42.688 [2024-12-12 05:47:50.177012] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:42.688 [2024-12-12 05:47:50.177096] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:42.688 pt2
00:09:42.688 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.688 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:42.688 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:42.688 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:09:42.688 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:09:42.688 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:09:42.688 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:42.688 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:09:42.688 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:42.688 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:09:42.688 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.688 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.948 malloc3
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.948 [2024-12-12 05:47:50.242947] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:09:42.948 [2024-12-12 05:47:50.243046] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:42.948 [2024-12-12 05:47:50.243085] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:09:42.948 [2024-12-12 05:47:50.243114] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:42.948 [2024-12-12 05:47:50.245136] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:42.948 [2024-12-12 05:47:50.245219] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:09:42.948 pt3
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.948 [2024-12-12 05:47:50.254975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:42.948 [2024-12-12 05:47:50.256738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:42.948 [2024-12-12 05:47:50.256839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:09:42.948 [2024-12-12 05:47:50.257032] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:09:42.948 [2024-12-12 05:47:50.257081] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:09:42.948 [2024-12-12 05:47:50.257355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:09:42.948 [2024-12-12 05:47:50.257546] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:09:42.948 [2024-12-12 05:47:50.257585] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:09:42.948 [2024-12-12 05:47:50.257785] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:42.948 "name": "raid_bdev1",
00:09:42.948 "uuid": "c7d21535-ee74-4840-b3bd-728c54c2f184",
00:09:42.948 "strip_size_kb": 64,
00:09:42.948 "state": "online",
00:09:42.948 "raid_level": "concat",
00:09:42.948 "superblock": true,
00:09:42.948 "num_base_bdevs": 3,
00:09:42.948 "num_base_bdevs_discovered": 3,
00:09:42.948 "num_base_bdevs_operational": 3,
00:09:42.948 "base_bdevs_list": [
00:09:42.948 {
00:09:42.948 "name": "pt1",
00:09:42.948 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:42.948 "is_configured": true,
00:09:42.948 "data_offset": 2048,
00:09:42.948 "data_size": 63488
00:09:42.948 },
00:09:42.948 {
00:09:42.948 "name": "pt2",
00:09:42.948 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:42.948 "is_configured": true,
00:09:42.948 "data_offset": 2048,
00:09:42.948 "data_size": 63488
00:09:42.948 },
00:09:42.948 {
00:09:42.948 "name": "pt3",
00:09:42.948 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:42.948 "is_configured": true,
00:09:42.948 "data_offset": 2048,
00:09:42.948 "data_size": 63488
00:09:42.948 }
00:09:42.948 ]
00:09:42.948 }'
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:42.948 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.208 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:09:43.208 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:09:43.209 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:43.209 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:43.209 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:43.209 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:43.209 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:43.209 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:43.209 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.209 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.209 [2024-12-12 05:47:50.670600] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:43.209 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.209 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:43.209 "name": "raid_bdev1",
00:09:43.209 "aliases": [
00:09:43.209 "c7d21535-ee74-4840-b3bd-728c54c2f184"
00:09:43.209 ],
00:09:43.209 "product_name": "Raid Volume",
00:09:43.209 "block_size": 512,
00:09:43.209 "num_blocks": 190464,
00:09:43.209 "uuid": "c7d21535-ee74-4840-b3bd-728c54c2f184",
00:09:43.209 "assigned_rate_limits": {
00:09:43.209 "rw_ios_per_sec": 0,
00:09:43.209 "rw_mbytes_per_sec": 0,
00:09:43.209 "r_mbytes_per_sec": 0,
00:09:43.209 "w_mbytes_per_sec": 0
00:09:43.209 },
00:09:43.209 "claimed": false,
00:09:43.209 "zoned": false,
00:09:43.209 "supported_io_types": {
00:09:43.209 "read": true,
00:09:43.209 "write": true,
00:09:43.209 "unmap": true,
00:09:43.209 "flush": true,
00:09:43.209 "reset": true,
00:09:43.209 "nvme_admin": false,
00:09:43.209 "nvme_io": false,
00:09:43.209 "nvme_io_md": false,
00:09:43.209 "write_zeroes": true,
00:09:43.209 "zcopy": false,
00:09:43.209 "get_zone_info": false,
00:09:43.209 "zone_management": false,
00:09:43.209 "zone_append": false,
00:09:43.209 "compare": false,
00:09:43.209 "compare_and_write": false,
00:09:43.209 "abort": false,
00:09:43.209 "seek_hole": false,
00:09:43.209 "seek_data": false,
00:09:43.209 "copy": false,
00:09:43.209 "nvme_iov_md": false
00:09:43.209 },
00:09:43.209 "memory_domains": [
00:09:43.209 {
00:09:43.209 "dma_device_id": "system",
00:09:43.209 "dma_device_type": 1
00:09:43.209 },
00:09:43.209 {
00:09:43.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:43.209 "dma_device_type": 2
00:09:43.209 },
00:09:43.209 {
00:09:43.209 "dma_device_id": "system",
00:09:43.209 "dma_device_type": 1
00:09:43.209 },
00:09:43.209 {
00:09:43.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:43.209 "dma_device_type": 2
00:09:43.209 },
00:09:43.209 {
00:09:43.209 "dma_device_id": "system",
00:09:43.209 "dma_device_type": 1
00:09:43.209 },
00:09:43.209 {
00:09:43.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:43.209 "dma_device_type": 2
00:09:43.209 }
00:09:43.209 ],
00:09:43.209 "driver_specific": {
00:09:43.209 "raid": {
00:09:43.209 "uuid": "c7d21535-ee74-4840-b3bd-728c54c2f184",
00:09:43.209 "strip_size_kb": 64,
00:09:43.209 "state": "online",
00:09:43.209 "raid_level": "concat",
00:09:43.209 "superblock": true,
00:09:43.209 "num_base_bdevs": 3,
00:09:43.209 "num_base_bdevs_discovered": 3,
00:09:43.209 "num_base_bdevs_operational": 3,
00:09:43.209 "base_bdevs_list": [
00:09:43.209 {
00:09:43.209 "name": "pt1",
00:09:43.209 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:43.209 "is_configured": true,
00:09:43.209 "data_offset": 2048,
00:09:43.209 "data_size": 63488
00:09:43.209 },
00:09:43.209 {
00:09:43.209 "name": "pt2",
00:09:43.209 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:43.209 "is_configured": true,
00:09:43.209 "data_offset": 2048,
00:09:43.209 "data_size": 63488
00:09:43.209 },
00:09:43.209 {
00:09:43.209 "name": "pt3",
00:09:43.209 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:43.209 "is_configured": true,
00:09:43.209 "data_offset": 2048,
00:09:43.209 "data_size": 63488
00:09:43.209 }
00:09:43.209 ]
00:09:43.209 }
00:09:43.209 }
00:09:43.209 }'
00:09:43.209 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:09:43.469 pt2
00:09:43.469 pt3'
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:09:43.469 [2024-12-12 05:47:50.918053] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.469 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c7d21535-ee74-4840-b3bd-728c54c2f184
00:09:43.470 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c7d21535-ee74-4840-b3bd-728c54c2f184 ']'
00:09:43.470 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:09:43.470 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.470 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.470 [2024-12-12 05:47:50.957731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:43.470 [2024-12-12 05:47:50.957796] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:43.470 [2024-12-12 05:47:50.957870] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:43.470 [2024-12-12 05:47:50.957929] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:43.470 [2024-12-12 05:47:50.957938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:09:43.470 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.470 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:43.470 05:47:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:09:43.470 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.470 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.470 05:47:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.730 [2024-12-12 05:47:51.113528] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:09:43.730 [2024-12-12 05:47:51.115356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:09:43.730 [2024-12-12 05:47:51.115406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:09:43.730 [2024-12-12 05:47:51.115452] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:09:43.730 [2024-12-12 05:47:51.115515] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:09:43.730 [2024-12-12 05:47:51.115534] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:09:43.730 [2024-12-12 05:47:51.115550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:43.730 [2024-12-12 05:47:51.115558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:09:43.730 request:
00:09:43.730 {
00:09:43.730 "name": "raid_bdev1",
00:09:43.730 "raid_level": "concat",
00:09:43.730 "base_bdevs": [
00:09:43.730 "malloc1",
00:09:43.730 "malloc2",
00:09:43.730 "malloc3"
00:09:43.730 ],
00:09:43.730 "strip_size_kb": 64,
00:09:43.730 "superblock": false,
00:09:43.730 "method": "bdev_raid_create",
00:09:43.730 "req_id": 1
00:09:43.730 }
00:09:43.730 Got JSON-RPC error response
00:09:43.730 response:
00:09:43.730 {
00:09:43.730 "code": -17,
00:09:43.730 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:09:43.730 }
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.730 [2024-12-12 05:47:51.165375] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:43.730 [2024-12-12 05:47:51.165465] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:43.730 [2024-12-12 05:47:51.165509] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:09:43.730 [2024-12-12 05:47:51.165539] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:43.730 [2024-12-12 05:47:51.167840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:43.730 [2024-12-12 05:47:51.167909] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:43.730 [2024-12-12 05:47:51.168018] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:09:43.730 [2024-12-12 05:47:51.168107] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:43.730 pt1
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:43.730 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:43.730 "name": "raid_bdev1",
00:09:43.730 "uuid": "c7d21535-ee74-4840-b3bd-728c54c2f184",
00:09:43.730 "strip_size_kb": 64,
00:09:43.730 "state": "configuring",
00:09:43.731 "raid_level": "concat",
00:09:43.731 "superblock": true,
00:09:43.731 "num_base_bdevs": 3,
00:09:43.731 "num_base_bdevs_discovered": 1,
00:09:43.731 "num_base_bdevs_operational": 3,
00:09:43.731 "base_bdevs_list": [
00:09:43.731 {
00:09:43.731 "name": "pt1",
00:09:43.731 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:43.731 "is_configured": true,
00:09:43.731 "data_offset": 2048,
00:09:43.731 "data_size": 63488
00:09:43.731 },
00:09:43.731 {
00:09:43.731 "name": null,
00:09:43.731 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:43.731 "is_configured": false,
00:09:43.731 "data_offset": 2048,
00:09:43.731 "data_size": 63488
00:09:43.731 },
00:09:43.731 {
00:09:43.731 "name": null,
00:09:43.731 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:43.731 "is_configured": false,
00:09:43.731 "data_offset": 2048,
00:09:43.731 "data_size": 63488
00:09:43.731 }
00:09:43.731 ]
00:09:43.731 }'
00:09:43.731 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:43.731 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.300 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']'
00:09:44.300 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:44.300 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.300 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.300 [2024-12-12 05:47:51.608649] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:44.300 [2024-12-12 05:47:51.608767] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:44.300 [2024-12-12 05:47:51.608811] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:09:44.300 [2024-12-12 05:47:51.608838] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:44.300 [2024-12-12 05:47:51.609307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:44.300 [2024-12-12 05:47:51.609370] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:44.300 [2024-12-12 05:47:51.609514] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:44.300 [2024-12-12 05:47:51.609588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:44.301 pt2
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.301 [2024-12-12 05:47:51.616643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:44.301 "name": "raid_bdev1",
00:09:44.301 "uuid": "c7d21535-ee74-4840-b3bd-728c54c2f184",
00:09:44.301 "strip_size_kb": 64,
00:09:44.301 "state": "configuring",
00:09:44.301 "raid_level": "concat",
00:09:44.301 "superblock": true,
00:09:44.301 "num_base_bdevs": 3,
00:09:44.301 "num_base_bdevs_discovered": 1,
00:09:44.301 "num_base_bdevs_operational": 3,
00:09:44.301 "base_bdevs_list": [
00:09:44.301 {
00:09:44.301 "name": "pt1",
00:09:44.301 "uuid": "00000000-0000-0000-0000-000000000001",
00:09:44.301 "is_configured": true,
00:09:44.301 "data_offset": 2048,
00:09:44.301 "data_size": 63488
00:09:44.301 },
00:09:44.301 {
00:09:44.301 "name": null,
00:09:44.301 "uuid": "00000000-0000-0000-0000-000000000002",
00:09:44.301 "is_configured": false,
00:09:44.301 "data_offset": 0,
00:09:44.301 "data_size": 63488
00:09:44.301 },
00:09:44.301 {
00:09:44.301 "name": null,
00:09:44.301 "uuid": "00000000-0000-0000-0000-000000000003",
00:09:44.301 "is_configured": false,
00:09:44.301 "data_offset": 2048,
00:09:44.301 "data_size": 63488
00:09:44.301 }
00:09:44.301 ]
00:09:44.301 }'
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:44.301 05:47:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:09:44.560 [2024-12-12 05:47:52.039913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:44.560 [2024-12-12 05:47:52.039984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:44.560 [2024-12-12 05:47:52.040002] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80
00:09:44.560 [2024-12-12 05:47:52.040012] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:44.560 [2024-12-12 05:47:52.040458] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:44.560 [2024-12-12 05:47:52.040479] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:44.560 [2024-12-12 05:47:52.040572] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:09:44.560 [2024-12-12 05:47:52.040598] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:44.560 pt2
00:09:44.560 05:47:52 bdev_raid.raid_superblock_test
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.560 [2024-12-12 05:47:52.051862] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:44.560 [2024-12-12 05:47:52.051913] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:44.560 [2024-12-12 05:47:52.051926] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:44.560 [2024-12-12 05:47:52.051936] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:44.560 [2024-12-12 05:47:52.052288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:44.560 [2024-12-12 05:47:52.052309] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:44.560 [2024-12-12 05:47:52.052370] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:44.560 [2024-12-12 05:47:52.052390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:44.560 [2024-12-12 05:47:52.052522] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:44.560 [2024-12-12 05:47:52.052535] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:44.560 [2024-12-12 05:47:52.052767] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:44.560 [2024-12-12 
05:47:52.052957] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:44.560 [2024-12-12 05:47:52.052973] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:44.560 [2024-12-12 05:47:52.053125] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:44.560 pt3 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:44.560 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:44.820 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.820 "name": "raid_bdev1", 00:09:44.820 "uuid": "c7d21535-ee74-4840-b3bd-728c54c2f184", 00:09:44.820 "strip_size_kb": 64, 00:09:44.820 "state": "online", 00:09:44.820 "raid_level": "concat", 00:09:44.820 "superblock": true, 00:09:44.820 "num_base_bdevs": 3, 00:09:44.820 "num_base_bdevs_discovered": 3, 00:09:44.820 "num_base_bdevs_operational": 3, 00:09:44.820 "base_bdevs_list": [ 00:09:44.820 { 00:09:44.820 "name": "pt1", 00:09:44.820 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:44.820 "is_configured": true, 00:09:44.820 "data_offset": 2048, 00:09:44.820 "data_size": 63488 00:09:44.820 }, 00:09:44.820 { 00:09:44.820 "name": "pt2", 00:09:44.820 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:44.820 "is_configured": true, 00:09:44.820 "data_offset": 2048, 00:09:44.820 "data_size": 63488 00:09:44.820 }, 00:09:44.820 { 00:09:44.820 "name": "pt3", 00:09:44.820 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:44.820 "is_configured": true, 00:09:44.820 "data_offset": 2048, 00:09:44.820 "data_size": 63488 00:09:44.820 } 00:09:44.820 ] 00:09:44.820 }' 00:09:44.820 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.820 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.080 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:45.080 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 
00:09:45.080 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:45.080 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:45.080 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:45.080 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:45.080 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:45.080 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.080 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.080 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:45.080 [2024-12-12 05:47:52.527405] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.080 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.080 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:45.080 "name": "raid_bdev1", 00:09:45.080 "aliases": [ 00:09:45.080 "c7d21535-ee74-4840-b3bd-728c54c2f184" 00:09:45.080 ], 00:09:45.080 "product_name": "Raid Volume", 00:09:45.080 "block_size": 512, 00:09:45.080 "num_blocks": 190464, 00:09:45.080 "uuid": "c7d21535-ee74-4840-b3bd-728c54c2f184", 00:09:45.080 "assigned_rate_limits": { 00:09:45.080 "rw_ios_per_sec": 0, 00:09:45.080 "rw_mbytes_per_sec": 0, 00:09:45.080 "r_mbytes_per_sec": 0, 00:09:45.080 "w_mbytes_per_sec": 0 00:09:45.080 }, 00:09:45.080 "claimed": false, 00:09:45.080 "zoned": false, 00:09:45.080 "supported_io_types": { 00:09:45.080 "read": true, 00:09:45.080 "write": true, 00:09:45.080 "unmap": true, 00:09:45.080 "flush": true, 00:09:45.080 "reset": true, 00:09:45.080 "nvme_admin": false, 00:09:45.080 "nvme_io": false, 00:09:45.080 "nvme_io_md": false, 
00:09:45.080 "write_zeroes": true, 00:09:45.080 "zcopy": false, 00:09:45.080 "get_zone_info": false, 00:09:45.080 "zone_management": false, 00:09:45.080 "zone_append": false, 00:09:45.080 "compare": false, 00:09:45.080 "compare_and_write": false, 00:09:45.080 "abort": false, 00:09:45.080 "seek_hole": false, 00:09:45.080 "seek_data": false, 00:09:45.080 "copy": false, 00:09:45.080 "nvme_iov_md": false 00:09:45.080 }, 00:09:45.080 "memory_domains": [ 00:09:45.080 { 00:09:45.080 "dma_device_id": "system", 00:09:45.080 "dma_device_type": 1 00:09:45.080 }, 00:09:45.080 { 00:09:45.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.080 "dma_device_type": 2 00:09:45.080 }, 00:09:45.080 { 00:09:45.080 "dma_device_id": "system", 00:09:45.080 "dma_device_type": 1 00:09:45.080 }, 00:09:45.080 { 00:09:45.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.080 "dma_device_type": 2 00:09:45.080 }, 00:09:45.080 { 00:09:45.080 "dma_device_id": "system", 00:09:45.080 "dma_device_type": 1 00:09:45.080 }, 00:09:45.080 { 00:09:45.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.080 "dma_device_type": 2 00:09:45.080 } 00:09:45.080 ], 00:09:45.080 "driver_specific": { 00:09:45.080 "raid": { 00:09:45.080 "uuid": "c7d21535-ee74-4840-b3bd-728c54c2f184", 00:09:45.080 "strip_size_kb": 64, 00:09:45.080 "state": "online", 00:09:45.080 "raid_level": "concat", 00:09:45.080 "superblock": true, 00:09:45.080 "num_base_bdevs": 3, 00:09:45.080 "num_base_bdevs_discovered": 3, 00:09:45.080 "num_base_bdevs_operational": 3, 00:09:45.080 "base_bdevs_list": [ 00:09:45.080 { 00:09:45.080 "name": "pt1", 00:09:45.080 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:45.080 "is_configured": true, 00:09:45.080 "data_offset": 2048, 00:09:45.080 "data_size": 63488 00:09:45.080 }, 00:09:45.080 { 00:09:45.080 "name": "pt2", 00:09:45.080 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:45.080 "is_configured": true, 00:09:45.080 "data_offset": 2048, 00:09:45.080 "data_size": 63488 00:09:45.080 }, 
00:09:45.080 { 00:09:45.080 "name": "pt3", 00:09:45.080 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:45.080 "is_configured": true, 00:09:45.080 "data_offset": 2048, 00:09:45.080 "data_size": 63488 00:09:45.080 } 00:09:45.080 ] 00:09:45.080 } 00:09:45.080 } 00:09:45.080 }' 00:09:45.080 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:45.340 pt2 00:09:45.340 pt3' 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:45.340 05:47:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.340 
[2024-12-12 05:47:52.794890] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c7d21535-ee74-4840-b3bd-728c54c2f184 '!=' c7d21535-ee74-4840-b3bd-728c54c2f184 ']' 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 67806 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 67806 ']' 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 67806 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:45.340 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67806 00:09:45.600 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:45.600 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:45.600 killing process with pid 67806 00:09:45.600 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67806' 00:09:45.600 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 67806 00:09:45.600 [2024-12-12 05:47:52.872892] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:45.600 [2024-12-12 05:47:52.872994] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:45.600 [2024-12-12 05:47:52.873055] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:45.600 [2024-12-12 05:47:52.873067] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:45.600 05:47:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 67806 00:09:45.859 [2024-12-12 05:47:53.165292] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:46.796 ************************************ 00:09:46.796 END TEST raid_superblock_test 00:09:46.796 ************************************ 00:09:46.796 05:47:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:46.796 00:09:46.796 real 0m5.083s 00:09:46.796 user 0m7.331s 00:09:46.796 sys 0m0.788s 00:09:46.796 05:47:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.796 05:47:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.796 05:47:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:09:46.796 05:47:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:46.796 05:47:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.796 05:47:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:47.056 ************************************ 00:09:47.056 START TEST raid_read_error_test 00:09:47.056 ************************************ 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:47.056 05:47:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.mX30tDq1au 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68059 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68059 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 68059 ']' 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:47.056 05:47:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.056 [2024-12-12 05:47:54.426454] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:09:47.057 [2024-12-12 05:47:54.426583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68059 ] 00:09:47.316 [2024-12-12 05:47:54.597439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.316 [2024-12-12 05:47:54.708070] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.576 [2024-12-12 05:47:54.900881] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.576 [2024-12-12 05:47:54.900935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:47.841 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:47.841 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:47.841 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:47.841 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:47.841 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.841 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.841 BaseBdev1_malloc 00:09:47.841 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.841 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:47.841 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.841 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.841 true 00:09:47.841 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:47.842 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:47.842 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.842 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.842 [2024-12-12 05:47:55.292356] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:47.842 [2024-12-12 05:47:55.292493] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.842 [2024-12-12 05:47:55.292534] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:47.842 [2024-12-12 05:47:55.292546] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.842 [2024-12-12 05:47:55.294604] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.842 [2024-12-12 05:47:55.294643] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:47.842 BaseBdev1 00:09:47.842 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.842 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:47.842 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:47.842 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.842 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.842 BaseBdev2_malloc 00:09:47.842 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.842 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:47.842 05:47:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.842 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.842 true 00:09:47.842 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.842 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:47.842 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.842 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.842 [2024-12-12 05:47:55.353340] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:47.842 [2024-12-12 05:47:55.353395] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:47.842 [2024-12-12 05:47:55.353410] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:47.842 [2024-12-12 05:47:55.353420] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:47.842 [2024-12-12 05:47:55.355422] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:47.842 [2024-12-12 05:47:55.355460] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:47.842 BaseBdev2 00:09:47.842 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.842 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:47.842 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:47.842 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.842 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.110 BaseBdev3_malloc 00:09:48.110 05:47:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.110 true 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.110 [2024-12-12 05:47:55.424412] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:48.110 [2024-12-12 05:47:55.424463] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:48.110 [2024-12-12 05:47:55.424480] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:48.110 [2024-12-12 05:47:55.424490] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:48.110 [2024-12-12 05:47:55.426587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:48.110 [2024-12-12 05:47:55.426622] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:48.110 BaseBdev3 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.110 [2024-12-12 05:47:55.436469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.110 [2024-12-12 05:47:55.438190] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:48.110 [2024-12-12 05:47:55.438342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:48.110 [2024-12-12 05:47:55.438580] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:48.110 [2024-12-12 05:47:55.438594] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:48.110 [2024-12-12 05:47:55.438828] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:48.110 [2024-12-12 05:47:55.438993] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:48.110 [2024-12-12 05:47:55.439006] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:48.110 [2024-12-12 05:47:55.439138] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.110 05:47:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.110 "name": "raid_bdev1", 00:09:48.110 "uuid": "016f403a-92f5-4669-938e-eb59e38b12a7", 00:09:48.110 "strip_size_kb": 64, 00:09:48.110 "state": "online", 00:09:48.110 "raid_level": "concat", 00:09:48.110 "superblock": true, 00:09:48.110 "num_base_bdevs": 3, 00:09:48.110 "num_base_bdevs_discovered": 3, 00:09:48.110 "num_base_bdevs_operational": 3, 00:09:48.110 "base_bdevs_list": [ 00:09:48.110 { 00:09:48.110 "name": "BaseBdev1", 00:09:48.110 "uuid": "0af3cb8a-4003-5f36-8d00-807429f85f8d", 00:09:48.110 "is_configured": true, 00:09:48.110 "data_offset": 2048, 00:09:48.110 "data_size": 63488 00:09:48.110 }, 00:09:48.110 { 00:09:48.110 "name": "BaseBdev2", 00:09:48.110 "uuid": "01cd3aec-3a25-56b9-9b01-7df3f5bd45d7", 00:09:48.110 "is_configured": true, 00:09:48.110 "data_offset": 2048, 00:09:48.110 "data_size": 63488 
00:09:48.110 }, 00:09:48.110 { 00:09:48.110 "name": "BaseBdev3", 00:09:48.110 "uuid": "70b3ae53-69b9-5748-914d-826f2f892e6d", 00:09:48.110 "is_configured": true, 00:09:48.110 "data_offset": 2048, 00:09:48.110 "data_size": 63488 00:09:48.110 } 00:09:48.110 ] 00:09:48.110 }' 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.110 05:47:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.369 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:48.369 05:47:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:48.629 [2024-12-12 05:47:55.900879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.568 "name": "raid_bdev1", 00:09:49.568 "uuid": "016f403a-92f5-4669-938e-eb59e38b12a7", 00:09:49.568 "strip_size_kb": 64, 00:09:49.568 "state": "online", 00:09:49.568 "raid_level": "concat", 00:09:49.568 "superblock": true, 00:09:49.568 "num_base_bdevs": 3, 00:09:49.568 "num_base_bdevs_discovered": 3, 00:09:49.568 "num_base_bdevs_operational": 3, 00:09:49.568 "base_bdevs_list": [ 00:09:49.568 { 00:09:49.568 "name": "BaseBdev1", 00:09:49.568 "uuid": "0af3cb8a-4003-5f36-8d00-807429f85f8d", 00:09:49.568 "is_configured": true, 00:09:49.568 "data_offset": 2048, 00:09:49.568 "data_size": 63488 
00:09:49.568 }, 00:09:49.568 { 00:09:49.568 "name": "BaseBdev2", 00:09:49.568 "uuid": "01cd3aec-3a25-56b9-9b01-7df3f5bd45d7", 00:09:49.568 "is_configured": true, 00:09:49.568 "data_offset": 2048, 00:09:49.568 "data_size": 63488 00:09:49.568 }, 00:09:49.568 { 00:09:49.568 "name": "BaseBdev3", 00:09:49.568 "uuid": "70b3ae53-69b9-5748-914d-826f2f892e6d", 00:09:49.568 "is_configured": true, 00:09:49.568 "data_offset": 2048, 00:09:49.568 "data_size": 63488 00:09:49.568 } 00:09:49.568 ] 00:09:49.568 }' 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.568 05:47:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.828 05:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:49.828 05:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.828 05:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.828 [2024-12-12 05:47:57.246902] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:49.828 [2024-12-12 05:47:57.247002] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:49.828 [2024-12-12 05:47:57.249678] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.828 [2024-12-12 05:47:57.249760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.828 [2024-12-12 05:47:57.249816] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:49.828 [2024-12-12 05:47:57.249857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:49.828 { 00:09:49.828 "results": [ 00:09:49.828 { 00:09:49.828 "job": "raid_bdev1", 00:09:49.828 "core_mask": "0x1", 00:09:49.828 "workload": "randrw", 00:09:49.828 "percentage": 50, 
00:09:49.828 "status": "finished", 00:09:49.828 "queue_depth": 1, 00:09:49.828 "io_size": 131072, 00:09:49.828 "runtime": 1.347065, 00:09:49.828 "iops": 16269.445052762858, 00:09:49.828 "mibps": 2033.6806315953572, 00:09:49.828 "io_failed": 1, 00:09:49.828 "io_timeout": 0, 00:09:49.828 "avg_latency_us": 85.10461281775049, 00:09:49.828 "min_latency_us": 25.152838427947597, 00:09:49.828 "max_latency_us": 1352.216593886463 00:09:49.828 } 00:09:49.828 ], 00:09:49.828 "core_count": 1 00:09:49.828 } 00:09:49.828 05:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.828 05:47:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68059 00:09:49.828 05:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 68059 ']' 00:09:49.828 05:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 68059 00:09:49.828 05:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:49.828 05:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:49.828 05:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68059 00:09:49.829 05:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:49.829 05:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:49.829 killing process with pid 68059 00:09:49.829 05:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68059' 00:09:49.829 05:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 68059 00:09:49.829 [2024-12-12 05:47:57.283852] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:49.829 05:47:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 68059 00:09:50.089 [2024-12-12 
05:47:57.503138] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:51.468 05:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:51.468 05:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.mX30tDq1au 00:09:51.468 05:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:51.468 ************************************ 00:09:51.468 END TEST raid_read_error_test 00:09:51.468 ************************************ 00:09:51.468 05:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:51.468 05:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:51.468 05:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:51.468 05:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:51.468 05:47:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:51.468 00:09:51.468 real 0m4.292s 00:09:51.468 user 0m5.040s 00:09:51.468 sys 0m0.519s 00:09:51.468 05:47:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.468 05:47:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.468 05:47:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:09:51.468 05:47:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:51.468 05:47:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.468 05:47:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:51.468 ************************************ 00:09:51.468 START TEST raid_write_error_test 00:09:51.468 ************************************ 00:09:51.468 05:47:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:09:51.468 05:47:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:51.468 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:51.468 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:51.468 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:51.468 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:51.469 05:47:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.nZ4IHAMJfS 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=68199 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 68199 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 68199 ']' 00:09:51.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.469 05:47:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.469 [2024-12-12 05:47:58.786756] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:09:51.469 [2024-12-12 05:47:58.786968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68199 ] 00:09:51.469 [2024-12-12 05:47:58.956187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.728 [2024-12-12 05:47:59.063223] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.987 [2024-12-12 05:47:59.252977] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:51.987 [2024-12-12 05:47:59.253012] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.247 BaseBdev1_malloc 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.247 true 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.247 [2024-12-12 05:47:59.656464] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:52.247 [2024-12-12 05:47:59.656589] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.247 [2024-12-12 05:47:59.656613] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:52.247 [2024-12-12 05:47:59.656624] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.247 [2024-12-12 05:47:59.658674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.247 [2024-12-12 05:47:59.658713] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:52.247 BaseBdev1 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:52.247 BaseBdev2_malloc 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.247 true 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.247 [2024-12-12 05:47:59.709149] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:52.247 [2024-12-12 05:47:59.709197] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.247 [2024-12-12 05:47:59.709211] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:52.247 [2024-12-12 05:47:59.709221] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.247 [2024-12-12 05:47:59.711259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.247 [2024-12-12 05:47:59.711297] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:52.247 BaseBdev2 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:52.247 05:47:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.247 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.507 BaseBdev3_malloc 00:09:52.507 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.507 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:52.507 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.507 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.507 true 00:09:52.507 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.507 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:52.507 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.507 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.507 [2024-12-12 05:47:59.799367] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:52.507 [2024-12-12 05:47:59.799416] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.507 [2024-12-12 05:47:59.799432] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:52.507 [2024-12-12 05:47:59.799443] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.507 [2024-12-12 05:47:59.801491] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.507 [2024-12-12 05:47:59.801538] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:52.507 BaseBdev3 00:09:52.507 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.508 [2024-12-12 05:47:59.807425] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:52.508 [2024-12-12 05:47:59.809181] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:52.508 [2024-12-12 05:47:59.809250] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.508 [2024-12-12 05:47:59.809435] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:52.508 [2024-12-12 05:47:59.809448] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:52.508 [2024-12-12 05:47:59.809684] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:09:52.508 [2024-12-12 05:47:59.809825] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:52.508 [2024-12-12 05:47:59.809838] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:52.508 [2024-12-12 05:47:59.809962] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:52.508 "name": "raid_bdev1", 00:09:52.508 "uuid": "990d0362-90c3-4ee7-bb8c-ed232b8b5a63", 00:09:52.508 "strip_size_kb": 64, 00:09:52.508 "state": "online", 00:09:52.508 "raid_level": "concat", 00:09:52.508 "superblock": true, 00:09:52.508 "num_base_bdevs": 3, 00:09:52.508 "num_base_bdevs_discovered": 3, 00:09:52.508 "num_base_bdevs_operational": 3, 00:09:52.508 "base_bdevs_list": [ 00:09:52.508 { 00:09:52.508 
"name": "BaseBdev1", 00:09:52.508 "uuid": "b67dfe45-8d87-59eb-a784-ac072da6e8d6", 00:09:52.508 "is_configured": true, 00:09:52.508 "data_offset": 2048, 00:09:52.508 "data_size": 63488 00:09:52.508 }, 00:09:52.508 { 00:09:52.508 "name": "BaseBdev2", 00:09:52.508 "uuid": "4a860ae3-3499-5fa3-a918-bc6a72fb2fdf", 00:09:52.508 "is_configured": true, 00:09:52.508 "data_offset": 2048, 00:09:52.508 "data_size": 63488 00:09:52.508 }, 00:09:52.508 { 00:09:52.508 "name": "BaseBdev3", 00:09:52.508 "uuid": "89d1d5cd-9adb-599f-b32d-c7aab1524bbe", 00:09:52.508 "is_configured": true, 00:09:52.508 "data_offset": 2048, 00:09:52.508 "data_size": 63488 00:09:52.508 } 00:09:52.508 ] 00:09:52.508 }' 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:52.508 05:47:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.768 05:48:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:52.768 05:48:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:53.028 [2024-12-12 05:48:00.319815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:53.967 "name": "raid_bdev1", 00:09:53.967 "uuid": "990d0362-90c3-4ee7-bb8c-ed232b8b5a63", 00:09:53.967 "strip_size_kb": 64, 00:09:53.967 "state": "online", 
00:09:53.967 "raid_level": "concat", 00:09:53.967 "superblock": true, 00:09:53.967 "num_base_bdevs": 3, 00:09:53.967 "num_base_bdevs_discovered": 3, 00:09:53.967 "num_base_bdevs_operational": 3, 00:09:53.967 "base_bdevs_list": [ 00:09:53.967 { 00:09:53.967 "name": "BaseBdev1", 00:09:53.967 "uuid": "b67dfe45-8d87-59eb-a784-ac072da6e8d6", 00:09:53.967 "is_configured": true, 00:09:53.967 "data_offset": 2048, 00:09:53.967 "data_size": 63488 00:09:53.967 }, 00:09:53.967 { 00:09:53.967 "name": "BaseBdev2", 00:09:53.967 "uuid": "4a860ae3-3499-5fa3-a918-bc6a72fb2fdf", 00:09:53.967 "is_configured": true, 00:09:53.967 "data_offset": 2048, 00:09:53.967 "data_size": 63488 00:09:53.967 }, 00:09:53.967 { 00:09:53.967 "name": "BaseBdev3", 00:09:53.967 "uuid": "89d1d5cd-9adb-599f-b32d-c7aab1524bbe", 00:09:53.967 "is_configured": true, 00:09:53.967 "data_offset": 2048, 00:09:53.967 "data_size": 63488 00:09:53.967 } 00:09:53.967 ] 00:09:53.967 }' 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:53.967 05:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.226 05:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:54.226 05:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.226 05:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.226 [2024-12-12 05:48:01.661963] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:54.226 [2024-12-12 05:48:01.662029] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:54.226 [2024-12-12 05:48:01.664727] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:54.226 [2024-12-12 05:48:01.664828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:54.226 [2024-12-12 05:48:01.664884] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:54.226 [2024-12-12 05:48:01.664916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:54.226 { 00:09:54.226 "results": [ 00:09:54.226 { 00:09:54.226 "job": "raid_bdev1", 00:09:54.226 "core_mask": "0x1", 00:09:54.226 "workload": "randrw", 00:09:54.226 "percentage": 50, 00:09:54.226 "status": "finished", 00:09:54.226 "queue_depth": 1, 00:09:54.226 "io_size": 131072, 00:09:54.226 "runtime": 1.34309, 00:09:54.226 "iops": 16383.116544684273, 00:09:54.226 "mibps": 2047.8895680855342, 00:09:54.226 "io_failed": 1, 00:09:54.226 "io_timeout": 0, 00:09:54.226 "avg_latency_us": 84.51813980347856, 00:09:54.226 "min_latency_us": 25.041048034934498, 00:09:54.226 "max_latency_us": 1345.0620087336245 00:09:54.226 } 00:09:54.226 ], 00:09:54.226 "core_count": 1 00:09:54.226 } 00:09:54.226 05:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.226 05:48:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 68199 00:09:54.226 05:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 68199 ']' 00:09:54.226 05:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 68199 00:09:54.226 05:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:54.226 05:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.226 05:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68199 00:09:54.226 killing process with pid 68199 00:09:54.226 05:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:54.226 05:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:54.226 05:48:01 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68199' 00:09:54.226 05:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 68199 00:09:54.226 05:48:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 68199 00:09:54.226 [2024-12-12 05:48:01.706192] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:54.486 [2024-12-12 05:48:01.916773] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:55.866 05:48:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.nZ4IHAMJfS 00:09:55.866 05:48:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:55.866 05:48:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:55.866 05:48:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:55.866 05:48:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:55.866 05:48:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:55.866 05:48:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:55.866 05:48:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:55.866 00:09:55.866 real 0m4.354s 00:09:55.866 user 0m5.150s 00:09:55.866 sys 0m0.516s 00:09:55.866 05:48:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.866 05:48:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.866 ************************************ 00:09:55.866 END TEST raid_write_error_test 00:09:55.866 ************************************ 00:09:55.866 05:48:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:55.866 05:48:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:09:55.866 05:48:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:55.866 05:48:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.866 05:48:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:55.866 ************************************ 00:09:55.866 START TEST raid_state_function_test 00:09:55.866 ************************************ 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=68337 00:09:55.866 Process raid pid: 68337 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68337' 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 68337 00:09:55.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 68337 ']' 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.866 05:48:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.866 [2024-12-12 05:48:03.209700] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:09:55.866 [2024-12-12 05:48:03.209813] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:55.867 [2024-12-12 05:48:03.382767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.126 [2024-12-12 05:48:03.499009] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.386 [2024-12-12 05:48:03.702028] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.386 [2024-12-12 05:48:03.702066] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n 
Existed_Raid 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.646 [2024-12-12 05:48:04.029615] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:56.646 [2024-12-12 05:48:04.029664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:56.646 [2024-12-12 05:48:04.029674] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.646 [2024-12-12 05:48:04.029683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.646 [2024-12-12 05:48:04.029690] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:56.646 [2024-12-12 05:48:04.029699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.646 05:48:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.646 "name": "Existed_Raid", 00:09:56.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.646 "strip_size_kb": 0, 00:09:56.646 "state": "configuring", 00:09:56.646 "raid_level": "raid1", 00:09:56.646 "superblock": false, 00:09:56.646 "num_base_bdevs": 3, 00:09:56.646 "num_base_bdevs_discovered": 0, 00:09:56.646 "num_base_bdevs_operational": 3, 00:09:56.646 "base_bdevs_list": [ 00:09:56.646 { 00:09:56.646 "name": "BaseBdev1", 00:09:56.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.646 "is_configured": false, 00:09:56.646 "data_offset": 0, 00:09:56.646 "data_size": 0 00:09:56.646 }, 00:09:56.646 { 00:09:56.646 "name": "BaseBdev2", 00:09:56.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.646 "is_configured": false, 00:09:56.646 "data_offset": 0, 00:09:56.646 "data_size": 0 00:09:56.646 }, 00:09:56.646 { 00:09:56.646 "name": "BaseBdev3", 00:09:56.646 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.646 "is_configured": false, 00:09:56.646 "data_offset": 0, 
00:09:56.646 "data_size": 0 00:09:56.646 } 00:09:56.646 ] 00:09:56.646 }' 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.646 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.216 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:57.216 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.216 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.216 [2024-12-12 05:48:04.496729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:57.216 [2024-12-12 05:48:04.496804] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:57.216 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.216 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:57.216 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.216 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.216 [2024-12-12 05:48:04.504716] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:57.216 [2024-12-12 05:48:04.504797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:57.216 [2024-12-12 05:48:04.504859] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:57.216 [2024-12-12 05:48:04.504901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:57.216 [2024-12-12 05:48:04.504939] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:09:57.216 [2024-12-12 05:48:04.504966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:57.216 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.216 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:57.216 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.216 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.216 [2024-12-12 05:48:04.551925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.216 BaseBdev1 00:09:57.216 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.216 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:57.216 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.217 [ 00:09:57.217 { 00:09:57.217 "name": "BaseBdev1", 00:09:57.217 "aliases": [ 00:09:57.217 "6f6e4bcd-fcde-4aab-97a2-bf8ddaaaf5c6" 00:09:57.217 ], 00:09:57.217 "product_name": "Malloc disk", 00:09:57.217 "block_size": 512, 00:09:57.217 "num_blocks": 65536, 00:09:57.217 "uuid": "6f6e4bcd-fcde-4aab-97a2-bf8ddaaaf5c6", 00:09:57.217 "assigned_rate_limits": { 00:09:57.217 "rw_ios_per_sec": 0, 00:09:57.217 "rw_mbytes_per_sec": 0, 00:09:57.217 "r_mbytes_per_sec": 0, 00:09:57.217 "w_mbytes_per_sec": 0 00:09:57.217 }, 00:09:57.217 "claimed": true, 00:09:57.217 "claim_type": "exclusive_write", 00:09:57.217 "zoned": false, 00:09:57.217 "supported_io_types": { 00:09:57.217 "read": true, 00:09:57.217 "write": true, 00:09:57.217 "unmap": true, 00:09:57.217 "flush": true, 00:09:57.217 "reset": true, 00:09:57.217 "nvme_admin": false, 00:09:57.217 "nvme_io": false, 00:09:57.217 "nvme_io_md": false, 00:09:57.217 "write_zeroes": true, 00:09:57.217 "zcopy": true, 00:09:57.217 "get_zone_info": false, 00:09:57.217 "zone_management": false, 00:09:57.217 "zone_append": false, 00:09:57.217 "compare": false, 00:09:57.217 "compare_and_write": false, 00:09:57.217 "abort": true, 00:09:57.217 "seek_hole": false, 00:09:57.217 "seek_data": false, 00:09:57.217 "copy": true, 00:09:57.217 "nvme_iov_md": false 00:09:57.217 }, 00:09:57.217 "memory_domains": [ 00:09:57.217 { 00:09:57.217 "dma_device_id": "system", 00:09:57.217 "dma_device_type": 1 00:09:57.217 }, 00:09:57.217 { 00:09:57.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.217 "dma_device_type": 2 00:09:57.217 } 00:09:57.217 ], 00:09:57.217 "driver_specific": {} 00:09:57.217 } 
00:09:57.217 ] 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.217 05:48:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.217 "name": "Existed_Raid", 00:09:57.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.217 "strip_size_kb": 0, 00:09:57.217 "state": "configuring", 00:09:57.217 "raid_level": "raid1", 00:09:57.217 "superblock": false, 00:09:57.217 "num_base_bdevs": 3, 00:09:57.217 "num_base_bdevs_discovered": 1, 00:09:57.217 "num_base_bdevs_operational": 3, 00:09:57.217 "base_bdevs_list": [ 00:09:57.217 { 00:09:57.217 "name": "BaseBdev1", 00:09:57.217 "uuid": "6f6e4bcd-fcde-4aab-97a2-bf8ddaaaf5c6", 00:09:57.217 "is_configured": true, 00:09:57.217 "data_offset": 0, 00:09:57.217 "data_size": 65536 00:09:57.217 }, 00:09:57.217 { 00:09:57.217 "name": "BaseBdev2", 00:09:57.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.217 "is_configured": false, 00:09:57.217 "data_offset": 0, 00:09:57.217 "data_size": 0 00:09:57.217 }, 00:09:57.217 { 00:09:57.217 "name": "BaseBdev3", 00:09:57.217 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.217 "is_configured": false, 00:09:57.217 "data_offset": 0, 00:09:57.217 "data_size": 0 00:09:57.217 } 00:09:57.217 ] 00:09:57.217 }' 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.217 05:48:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.786 [2024-12-12 05:48:05.027150] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:57.786 [2024-12-12 05:48:05.027248] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 
00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.786 [2024-12-12 05:48:05.035177] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:57.786 [2024-12-12 05:48:05.037037] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:57.786 [2024-12-12 05:48:05.037112] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:57.786 [2024-12-12 05:48:05.037142] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:57.786 [2024-12-12 05:48:05.037164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:57.786 05:48:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.786 "name": "Existed_Raid", 00:09:57.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.786 "strip_size_kb": 0, 00:09:57.786 "state": "configuring", 00:09:57.786 "raid_level": "raid1", 00:09:57.786 "superblock": false, 00:09:57.786 "num_base_bdevs": 3, 00:09:57.786 "num_base_bdevs_discovered": 1, 00:09:57.786 "num_base_bdevs_operational": 3, 00:09:57.786 "base_bdevs_list": [ 00:09:57.786 { 00:09:57.786 "name": "BaseBdev1", 00:09:57.786 "uuid": "6f6e4bcd-fcde-4aab-97a2-bf8ddaaaf5c6", 00:09:57.786 "is_configured": true, 00:09:57.786 "data_offset": 0, 00:09:57.786 "data_size": 65536 00:09:57.786 }, 00:09:57.786 { 00:09:57.786 "name": "BaseBdev2", 00:09:57.786 
"uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.786 "is_configured": false, 00:09:57.786 "data_offset": 0, 00:09:57.786 "data_size": 0 00:09:57.786 }, 00:09:57.786 { 00:09:57.786 "name": "BaseBdev3", 00:09:57.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.786 "is_configured": false, 00:09:57.786 "data_offset": 0, 00:09:57.786 "data_size": 0 00:09:57.786 } 00:09:57.786 ] 00:09:57.786 }' 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.786 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.045 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:58.045 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.045 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.045 [2024-12-12 05:48:05.458541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.045 BaseBdev2 00:09:58.045 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.045 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:58.045 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:58.045 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.045 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.045 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.045 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.045 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd 
bdev_wait_for_examine 00:09:58.045 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.045 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.045 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.045 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:58.045 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.045 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.045 [ 00:09:58.045 { 00:09:58.045 "name": "BaseBdev2", 00:09:58.045 "aliases": [ 00:09:58.045 "8dd1e99a-96ef-4e4b-8ecb-67f581c77089" 00:09:58.045 ], 00:09:58.045 "product_name": "Malloc disk", 00:09:58.045 "block_size": 512, 00:09:58.045 "num_blocks": 65536, 00:09:58.045 "uuid": "8dd1e99a-96ef-4e4b-8ecb-67f581c77089", 00:09:58.045 "assigned_rate_limits": { 00:09:58.045 "rw_ios_per_sec": 0, 00:09:58.045 "rw_mbytes_per_sec": 0, 00:09:58.045 "r_mbytes_per_sec": 0, 00:09:58.045 "w_mbytes_per_sec": 0 00:09:58.045 }, 00:09:58.045 "claimed": true, 00:09:58.045 "claim_type": "exclusive_write", 00:09:58.045 "zoned": false, 00:09:58.045 "supported_io_types": { 00:09:58.045 "read": true, 00:09:58.045 "write": true, 00:09:58.045 "unmap": true, 00:09:58.045 "flush": true, 00:09:58.045 "reset": true, 00:09:58.045 "nvme_admin": false, 00:09:58.045 "nvme_io": false, 00:09:58.045 "nvme_io_md": false, 00:09:58.045 "write_zeroes": true, 00:09:58.045 "zcopy": true, 00:09:58.045 "get_zone_info": false, 00:09:58.045 "zone_management": false, 00:09:58.045 "zone_append": false, 00:09:58.045 "compare": false, 00:09:58.045 "compare_and_write": false, 00:09:58.045 "abort": true, 00:09:58.045 "seek_hole": false, 00:09:58.046 "seek_data": false, 00:09:58.046 "copy": true, 00:09:58.046 "nvme_iov_md": false 
00:09:58.046 },
00:09:58.046 "memory_domains": [
00:09:58.046 {
00:09:58.046 "dma_device_id": "system",
00:09:58.046 "dma_device_type": 1
00:09:58.046 },
00:09:58.046 {
00:09:58.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:58.046 "dma_device_type": 2
00:09:58.046 }
00:09:58.046 ],
00:09:58.046 "driver_specific": {}
00:09:58.046 }
00:09:58.046 ]
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:58.046 "name": "Existed_Raid",
00:09:58.046 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:58.046 "strip_size_kb": 0,
00:09:58.046 "state": "configuring",
00:09:58.046 "raid_level": "raid1",
00:09:58.046 "superblock": false,
00:09:58.046 "num_base_bdevs": 3,
00:09:58.046 "num_base_bdevs_discovered": 2,
00:09:58.046 "num_base_bdevs_operational": 3,
00:09:58.046 "base_bdevs_list": [
00:09:58.046 {
00:09:58.046 "name": "BaseBdev1",
00:09:58.046 "uuid": "6f6e4bcd-fcde-4aab-97a2-bf8ddaaaf5c6",
00:09:58.046 "is_configured": true,
00:09:58.046 "data_offset": 0,
00:09:58.046 "data_size": 65536
00:09:58.046 },
00:09:58.046 {
00:09:58.046 "name": "BaseBdev2",
00:09:58.046 "uuid": "8dd1e99a-96ef-4e4b-8ecb-67f581c77089",
00:09:58.046 "is_configured": true,
00:09:58.046 "data_offset": 0,
00:09:58.046 "data_size": 65536
00:09:58.046 },
00:09:58.046 {
00:09:58.046 "name": "BaseBdev3",
00:09:58.046 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:58.046 "is_configured": false,
00:09:58.046 "data_offset": 0,
00:09:58.046 "data_size": 0
00:09:58.046 }
00:09:58.046 ]
00:09:58.046 }'
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:58.046 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.615 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:58.615 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.615 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.615 [2024-12-12 05:48:05.980897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:58.615 [2024-12-12 05:48:05.981035] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:58.615 [2024-12-12 05:48:05.981065] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:09:58.615 [2024-12-12 05:48:05.981408] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:09:58.615 [2024-12-12 05:48:05.981661] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:58.615 [2024-12-12 05:48:05.981705] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:09:58.615 [2024-12-12 05:48:05.982027] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:58.615 BaseBdev3
00:09:58.615 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.615 05:48:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:09:58.615 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:09:58.615 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:58.615 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:09:58.615 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:58.615 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:58.615 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:58.615 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.615 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.615 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.615 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:09:58.615 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.615 05:48:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.615 [
00:09:58.615 {
00:09:58.615 "name": "BaseBdev3",
00:09:58.615 "aliases": [
00:09:58.615 "9091951c-f373-4603-a487-94d918da5462"
00:09:58.615 ],
00:09:58.615 "product_name": "Malloc disk",
00:09:58.615 "block_size": 512,
00:09:58.615 "num_blocks": 65536,
00:09:58.615 "uuid": "9091951c-f373-4603-a487-94d918da5462",
00:09:58.615 "assigned_rate_limits": {
00:09:58.615 "rw_ios_per_sec": 0,
00:09:58.615 "rw_mbytes_per_sec": 0,
00:09:58.615 "r_mbytes_per_sec": 0,
00:09:58.615 "w_mbytes_per_sec": 0
00:09:58.615 },
00:09:58.615 "claimed": true,
00:09:58.615 "claim_type": "exclusive_write",
00:09:58.615 "zoned": false,
00:09:58.615 "supported_io_types": {
00:09:58.615 "read": true,
00:09:58.615 "write": true,
00:09:58.615 "unmap": true,
00:09:58.615 "flush": true,
00:09:58.615 "reset": true,
00:09:58.615 "nvme_admin": false,
00:09:58.615 "nvme_io": false,
00:09:58.615 "nvme_io_md": false,
00:09:58.615 "write_zeroes": true,
00:09:58.615 "zcopy": true,
00:09:58.615 "get_zone_info": false,
00:09:58.615 "zone_management": false,
00:09:58.615 "zone_append": false,
00:09:58.615 "compare": false,
00:09:58.615 "compare_and_write": false,
00:09:58.615 "abort": true,
00:09:58.615 "seek_hole": false,
00:09:58.615 "seek_data": false,
00:09:58.615 "copy": true,
00:09:58.615 "nvme_iov_md": false
00:09:58.615 },
00:09:58.615 "memory_domains": [
00:09:58.615 {
00:09:58.615 "dma_device_id": "system",
00:09:58.615 "dma_device_type": 1
00:09:58.615 },
00:09:58.615 {
00:09:58.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:58.615 "dma_device_type": 2
00:09:58.615 }
00:09:58.615 ],
00:09:58.615 "driver_specific": {}
00:09:58.615 }
00:09:58.615 ]
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:58.615 "name": "Existed_Raid",
00:09:58.615 "uuid": "a047eb77-2c99-45cb-a6ff-393000fef78e",
00:09:58.615 "strip_size_kb": 0,
00:09:58.615 "state": "online",
00:09:58.615 "raid_level": "raid1",
00:09:58.615 "superblock": false,
00:09:58.615 "num_base_bdevs": 3,
00:09:58.615 "num_base_bdevs_discovered": 3,
00:09:58.615 "num_base_bdevs_operational": 3,
00:09:58.615 "base_bdevs_list": [
00:09:58.615 {
00:09:58.615 "name": "BaseBdev1",
00:09:58.615 "uuid": "6f6e4bcd-fcde-4aab-97a2-bf8ddaaaf5c6",
00:09:58.615 "is_configured": true,
00:09:58.615 "data_offset": 0,
00:09:58.615 "data_size": 65536
00:09:58.615 },
00:09:58.615 {
00:09:58.615 "name": "BaseBdev2",
00:09:58.615 "uuid": "8dd1e99a-96ef-4e4b-8ecb-67f581c77089",
00:09:58.615 "is_configured": true,
00:09:58.615 "data_offset": 0,
00:09:58.615 "data_size": 65536
00:09:58.615 },
00:09:58.615 {
00:09:58.615 "name": "BaseBdev3",
00:09:58.615 "uuid": "9091951c-f373-4603-a487-94d918da5462",
00:09:58.615 "is_configured": true,
00:09:58.615 "data_offset": 0,
00:09:58.615 "data_size": 65536
00:09:58.615 }
00:09:58.615 ]
00:09:58.615 }'
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:58.615 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.184 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:59.184 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:59.184 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:59.184 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:59.184 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name
00:09:59.184 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:59.184 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:59.184 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:59.184 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.184 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.184 [2024-12-12 05:48:06.452415] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:59.184 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.184 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:59.184 "name": "Existed_Raid",
00:09:59.184 "aliases": [
00:09:59.184 "a047eb77-2c99-45cb-a6ff-393000fef78e"
00:09:59.184 ],
00:09:59.184 "product_name": "Raid Volume",
00:09:59.184 "block_size": 512,
00:09:59.184 "num_blocks": 65536,
00:09:59.184 "uuid": "a047eb77-2c99-45cb-a6ff-393000fef78e",
00:09:59.184 "assigned_rate_limits": {
00:09:59.184 "rw_ios_per_sec": 0,
00:09:59.184 "rw_mbytes_per_sec": 0,
00:09:59.184 "r_mbytes_per_sec": 0,
00:09:59.184 "w_mbytes_per_sec": 0
00:09:59.184 },
00:09:59.184 "claimed": false,
00:09:59.184 "zoned": false,
00:09:59.184 "supported_io_types": {
00:09:59.184 "read": true,
00:09:59.184 "write": true,
00:09:59.184 "unmap": false,
00:09:59.184 "flush": false,
00:09:59.184 "reset": true,
00:09:59.184 "nvme_admin": false,
00:09:59.184 "nvme_io": false,
00:09:59.185 "nvme_io_md": false,
00:09:59.185 "write_zeroes": true,
00:09:59.185 "zcopy": false,
00:09:59.185 "get_zone_info": false,
00:09:59.185 "zone_management": false,
00:09:59.185 "zone_append": false,
00:09:59.185 "compare": false,
00:09:59.185 "compare_and_write": false,
00:09:59.185 "abort": false,
00:09:59.185 "seek_hole": false,
00:09:59.185 "seek_data": false,
00:09:59.185 "copy": false,
00:09:59.185 "nvme_iov_md": false
00:09:59.185 },
00:09:59.185 "memory_domains": [
00:09:59.185 {
00:09:59.185 "dma_device_id": "system",
00:09:59.185 "dma_device_type": 1
00:09:59.185 },
00:09:59.185 {
00:09:59.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:59.185 "dma_device_type": 2
00:09:59.185 },
00:09:59.185 {
00:09:59.185 "dma_device_id": "system",
00:09:59.185 "dma_device_type": 1
00:09:59.185 },
00:09:59.185 {
00:09:59.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:59.185 "dma_device_type": 2
00:09:59.185 },
00:09:59.185 {
00:09:59.185 "dma_device_id": "system",
00:09:59.185 "dma_device_type": 1
00:09:59.185 },
00:09:59.185 {
00:09:59.185 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:59.185 "dma_device_type": 2
00:09:59.185 }
00:09:59.185 ],
00:09:59.185 "driver_specific": {
00:09:59.185 "raid": {
00:09:59.185 "uuid": "a047eb77-2c99-45cb-a6ff-393000fef78e",
00:09:59.185 "strip_size_kb": 0,
00:09:59.185 "state": "online",
00:09:59.185 "raid_level": "raid1",
00:09:59.185 "superblock": false,
00:09:59.185 "num_base_bdevs": 3,
00:09:59.185 "num_base_bdevs_discovered": 3,
00:09:59.185 "num_base_bdevs_operational": 3,
00:09:59.185 "base_bdevs_list": [
00:09:59.185 {
00:09:59.185 "name": "BaseBdev1",
00:09:59.185 "uuid": "6f6e4bcd-fcde-4aab-97a2-bf8ddaaaf5c6",
00:09:59.185 "is_configured": true,
00:09:59.185 "data_offset": 0,
00:09:59.185 "data_size": 65536
00:09:59.185 },
00:09:59.185 {
00:09:59.185 "name": "BaseBdev2",
00:09:59.185 "uuid": "8dd1e99a-96ef-4e4b-8ecb-67f581c77089",
00:09:59.185 "is_configured": true,
00:09:59.185 "data_offset": 0,
00:09:59.185 "data_size": 65536
00:09:59.185 },
00:09:59.185 {
00:09:59.185 "name": "BaseBdev3",
00:09:59.185 "uuid": "9091951c-f373-4603-a487-94d918da5462",
00:09:59.185 "is_configured": true,
00:09:59.185 "data_offset": 0,
00:09:59.185 "data_size": 65536
00:09:59.185 }
00:09:59.185 ]
00:09:59.185 }
00:09:59.185 }
00:09:59.185 }'
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:59.185 BaseBdev2
00:09:59.185 BaseBdev3'
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.185 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.185 [2024-12-12 05:48:06.683778] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.445 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:59.445 "name": "Existed_Raid",
00:09:59.445 "uuid": "a047eb77-2c99-45cb-a6ff-393000fef78e",
00:09:59.445 "strip_size_kb": 0,
00:09:59.445 "state": "online",
00:09:59.445 "raid_level": "raid1",
00:09:59.445 "superblock": false,
00:09:59.445 "num_base_bdevs": 3,
00:09:59.445 "num_base_bdevs_discovered": 2,
00:09:59.445 "num_base_bdevs_operational": 2,
00:09:59.445 "base_bdevs_list": [
00:09:59.445 {
00:09:59.445 "name": null,
00:09:59.445 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:59.445 "is_configured": false,
00:09:59.445 "data_offset": 0,
00:09:59.445 "data_size": 65536
00:09:59.445 },
00:09:59.445 {
00:09:59.445 "name": "BaseBdev2",
00:09:59.445 "uuid": "8dd1e99a-96ef-4e4b-8ecb-67f581c77089",
00:09:59.445 "is_configured": true,
00:09:59.445 "data_offset": 0,
00:09:59.445 "data_size": 65536
00:09:59.445 },
00:09:59.445 {
00:09:59.445 "name": "BaseBdev3",
00:09:59.445 "uuid": "9091951c-f373-4603-a487-94d918da5462",
00:09:59.446 "is_configured": true,
00:09:59.446 "data_offset": 0,
00:09:59.446 "data_size": 65536
00:09:59.446 }
00:09:59.446 ]
00:09:59.446 }'
00:09:59.446 05:48:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:59.446 05:48:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.705 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:09:59.705 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:59.705 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:59.705 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:59.705 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.705 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.705 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.966 [2024-12-12 05:48:07.244079] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:59.966 [2024-12-12 05:48:07.388559] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:09:59.966 [2024-12-12 05:48:07.388694] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:59.966 [2024-12-12 05:48:07.478594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:59.966 [2024-12-12 05:48:07.478728] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:59.966 [2024-12-12 05:48:07.478770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:09:59.966 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.226 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.226 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:10:00.226 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:10:00.226 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']'
00:10:00.226 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:10:00.226 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:00.226 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:10:00.226 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.226 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.226 BaseBdev2
00:10:00.226 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.226 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:10:00.226 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:10:00.226 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:00.226 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:00.226 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:00.226 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:00.226 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:00.226 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.226 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.227 [
00:10:00.227 {
00:10:00.227 "name": "BaseBdev2",
00:10:00.227 "aliases": [
00:10:00.227 "998ce644-d905-4fc9-b8b5-7698ce2922a9"
00:10:00.227 ],
00:10:00.227 "product_name": "Malloc disk",
00:10:00.227 "block_size": 512,
00:10:00.227 "num_blocks": 65536,
00:10:00.227 "uuid": "998ce644-d905-4fc9-b8b5-7698ce2922a9",
00:10:00.227 "assigned_rate_limits": {
00:10:00.227 "rw_ios_per_sec": 0,
00:10:00.227 "rw_mbytes_per_sec": 0,
00:10:00.227 "r_mbytes_per_sec": 0,
00:10:00.227 "w_mbytes_per_sec": 0
00:10:00.227 },
00:10:00.227 "claimed": false,
00:10:00.227 "zoned": false,
00:10:00.227 "supported_io_types": {
00:10:00.227 "read": true,
00:10:00.227 "write": true,
00:10:00.227 "unmap": true,
00:10:00.227 "flush": true,
00:10:00.227 "reset": true,
00:10:00.227 "nvme_admin": false,
00:10:00.227 "nvme_io": false,
00:10:00.227 "nvme_io_md": false,
00:10:00.227 "write_zeroes": true,
00:10:00.227 "zcopy": true,
00:10:00.227 "get_zone_info": false,
00:10:00.227 "zone_management": false,
00:10:00.227 "zone_append": false,
00:10:00.227 "compare": false,
00:10:00.227 "compare_and_write": false,
00:10:00.227 "abort": true,
00:10:00.227 "seek_hole": false,
00:10:00.227 "seek_data": false,
00:10:00.227 "copy": true,
00:10:00.227 "nvme_iov_md": false
00:10:00.227 },
00:10:00.227 "memory_domains": [
00:10:00.227 {
00:10:00.227 "dma_device_id": "system",
00:10:00.227 "dma_device_type": 1
00:10:00.227 },
00:10:00.227 {
00:10:00.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:00.227 "dma_device_type": 2
00:10:00.227 }
00:10:00.227 ],
00:10:00.227 "driver_specific": {}
00:10:00.227 }
00:10:00.227 ]
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.227 BaseBdev3
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:10:00.227 [
00:10:00.227 {
00:10:00.227 "name": "BaseBdev3",
00:10:00.227 "aliases": [
00:10:00.227 "1d1a7f9a-e24e-4a6f-80e3-14512e909cd6"
00:10:00.227 ],
00:10:00.227 "product_name": "Malloc disk",
00:10:00.227 "block_size": 512,
00:10:00.227 "num_blocks": 65536,
00:10:00.227 "uuid": "1d1a7f9a-e24e-4a6f-80e3-14512e909cd6",
00:10:00.227 "assigned_rate_limits": {
00:10:00.227 "rw_ios_per_sec": 0,
00:10:00.227 "rw_mbytes_per_sec": 0,
00:10:00.227 "r_mbytes_per_sec": 0,
00:10:00.227 "w_mbytes_per_sec": 0
00:10:00.227 },
00:10:00.227 "claimed": false,
00:10:00.227 "zoned": false,
00:10:00.227 "supported_io_types": {
00:10:00.227 "read": true,
00:10:00.227 "write": true,
00:10:00.227 "unmap": true,
00:10:00.227 "flush": true,
00:10:00.227 "reset": true,
00:10:00.227 "nvme_admin": false,
00:10:00.227 "nvme_io": false,
00:10:00.227 "nvme_io_md": false,
00:10:00.227 "write_zeroes": true,
00:10:00.227 "zcopy": true, 00:10:00.227 "get_zone_info": false, 00:10:00.227 "zone_management": false, 00:10:00.227 "zone_append": false, 00:10:00.227 "compare": false, 00:10:00.227 "compare_and_write": false, 00:10:00.227 "abort": true, 00:10:00.227 "seek_hole": false, 00:10:00.227 "seek_data": false, 00:10:00.227 "copy": true, 00:10:00.227 "nvme_iov_md": false 00:10:00.227 }, 00:10:00.227 "memory_domains": [ 00:10:00.227 { 00:10:00.227 "dma_device_id": "system", 00:10:00.227 "dma_device_type": 1 00:10:00.227 }, 00:10:00.227 { 00:10:00.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:00.227 "dma_device_type": 2 00:10:00.227 } 00:10:00.227 ], 00:10:00.227 "driver_specific": {} 00:10:00.227 } 00:10:00.227 ] 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.227 [2024-12-12 05:48:07.650206] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:00.227 [2024-12-12 05:48:07.650301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:00.227 [2024-12-12 05:48:07.650344] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:00.227 [2024-12-12 05:48:07.652184] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:00.227 "name": "Existed_Raid", 00:10:00.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.227 "strip_size_kb": 0, 00:10:00.227 "state": "configuring", 00:10:00.227 "raid_level": "raid1", 00:10:00.227 "superblock": false, 00:10:00.227 "num_base_bdevs": 3, 00:10:00.227 "num_base_bdevs_discovered": 2, 00:10:00.227 "num_base_bdevs_operational": 3, 00:10:00.227 "base_bdevs_list": [ 00:10:00.227 { 00:10:00.227 "name": "BaseBdev1", 00:10:00.227 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.227 "is_configured": false, 00:10:00.227 "data_offset": 0, 00:10:00.227 "data_size": 0 00:10:00.227 }, 00:10:00.227 { 00:10:00.227 "name": "BaseBdev2", 00:10:00.227 "uuid": "998ce644-d905-4fc9-b8b5-7698ce2922a9", 00:10:00.227 "is_configured": true, 00:10:00.227 "data_offset": 0, 00:10:00.227 "data_size": 65536 00:10:00.227 }, 00:10:00.227 { 00:10:00.227 "name": "BaseBdev3", 00:10:00.227 "uuid": "1d1a7f9a-e24e-4a6f-80e3-14512e909cd6", 00:10:00.227 "is_configured": true, 00:10:00.227 "data_offset": 0, 00:10:00.227 "data_size": 65536 00:10:00.227 } 00:10:00.227 ] 00:10:00.227 }' 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.227 05:48:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.797 [2024-12-12 05:48:08.037594] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.797 "name": "Existed_Raid", 00:10:00.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.797 "strip_size_kb": 0, 00:10:00.797 "state": "configuring", 00:10:00.797 "raid_level": "raid1", 00:10:00.797 "superblock": false, 00:10:00.797 "num_base_bdevs": 3, 
00:10:00.797 "num_base_bdevs_discovered": 1, 00:10:00.797 "num_base_bdevs_operational": 3, 00:10:00.797 "base_bdevs_list": [ 00:10:00.797 { 00:10:00.797 "name": "BaseBdev1", 00:10:00.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.797 "is_configured": false, 00:10:00.797 "data_offset": 0, 00:10:00.797 "data_size": 0 00:10:00.797 }, 00:10:00.797 { 00:10:00.797 "name": null, 00:10:00.797 "uuid": "998ce644-d905-4fc9-b8b5-7698ce2922a9", 00:10:00.797 "is_configured": false, 00:10:00.797 "data_offset": 0, 00:10:00.797 "data_size": 65536 00:10:00.797 }, 00:10:00.797 { 00:10:00.797 "name": "BaseBdev3", 00:10:00.797 "uuid": "1d1a7f9a-e24e-4a6f-80e3-14512e909cd6", 00:10:00.797 "is_configured": true, 00:10:00.797 "data_offset": 0, 00:10:00.797 "data_size": 65536 00:10:00.797 } 00:10:00.797 ] 00:10:00.797 }' 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.797 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.057 05:48:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.057 [2024-12-12 05:48:08.521559] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.057 BaseBdev1 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.057 [ 00:10:01.057 { 00:10:01.057 "name": "BaseBdev1", 00:10:01.057 "aliases": [ 00:10:01.057 "3a0d03f6-42c2-4342-b552-4a809f40c665" 00:10:01.057 ], 00:10:01.057 "product_name": "Malloc disk", 
00:10:01.057 "block_size": 512, 00:10:01.057 "num_blocks": 65536, 00:10:01.057 "uuid": "3a0d03f6-42c2-4342-b552-4a809f40c665", 00:10:01.057 "assigned_rate_limits": { 00:10:01.057 "rw_ios_per_sec": 0, 00:10:01.057 "rw_mbytes_per_sec": 0, 00:10:01.057 "r_mbytes_per_sec": 0, 00:10:01.057 "w_mbytes_per_sec": 0 00:10:01.057 }, 00:10:01.057 "claimed": true, 00:10:01.057 "claim_type": "exclusive_write", 00:10:01.057 "zoned": false, 00:10:01.057 "supported_io_types": { 00:10:01.057 "read": true, 00:10:01.057 "write": true, 00:10:01.057 "unmap": true, 00:10:01.057 "flush": true, 00:10:01.057 "reset": true, 00:10:01.057 "nvme_admin": false, 00:10:01.057 "nvme_io": false, 00:10:01.057 "nvme_io_md": false, 00:10:01.057 "write_zeroes": true, 00:10:01.057 "zcopy": true, 00:10:01.057 "get_zone_info": false, 00:10:01.057 "zone_management": false, 00:10:01.057 "zone_append": false, 00:10:01.057 "compare": false, 00:10:01.057 "compare_and_write": false, 00:10:01.057 "abort": true, 00:10:01.057 "seek_hole": false, 00:10:01.057 "seek_data": false, 00:10:01.057 "copy": true, 00:10:01.057 "nvme_iov_md": false 00:10:01.057 }, 00:10:01.057 "memory_domains": [ 00:10:01.057 { 00:10:01.057 "dma_device_id": "system", 00:10:01.057 "dma_device_type": 1 00:10:01.057 }, 00:10:01.057 { 00:10:01.057 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:01.057 "dma_device_type": 2 00:10:01.057 } 00:10:01.057 ], 00:10:01.057 "driver_specific": {} 00:10:01.057 } 00:10:01.057 ] 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.057 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.317 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.317 "name": "Existed_Raid", 00:10:01.317 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.317 "strip_size_kb": 0, 00:10:01.317 "state": "configuring", 00:10:01.317 "raid_level": "raid1", 00:10:01.317 "superblock": false, 00:10:01.317 "num_base_bdevs": 3, 00:10:01.317 "num_base_bdevs_discovered": 2, 00:10:01.317 "num_base_bdevs_operational": 3, 00:10:01.317 "base_bdevs_list": [ 00:10:01.317 { 00:10:01.317 "name": "BaseBdev1", 00:10:01.317 "uuid": 
"3a0d03f6-42c2-4342-b552-4a809f40c665", 00:10:01.317 "is_configured": true, 00:10:01.317 "data_offset": 0, 00:10:01.317 "data_size": 65536 00:10:01.317 }, 00:10:01.317 { 00:10:01.317 "name": null, 00:10:01.317 "uuid": "998ce644-d905-4fc9-b8b5-7698ce2922a9", 00:10:01.317 "is_configured": false, 00:10:01.317 "data_offset": 0, 00:10:01.317 "data_size": 65536 00:10:01.317 }, 00:10:01.317 { 00:10:01.317 "name": "BaseBdev3", 00:10:01.317 "uuid": "1d1a7f9a-e24e-4a6f-80e3-14512e909cd6", 00:10:01.317 "is_configured": true, 00:10:01.317 "data_offset": 0, 00:10:01.317 "data_size": 65536 00:10:01.317 } 00:10:01.317 ] 00:10:01.317 }' 00:10:01.317 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.317 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.578 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.578 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.578 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.578 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:01.578 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.578 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:01.578 05:48:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:01.578 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.578 05:48:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.578 [2024-12-12 05:48:09.000766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:01.579 05:48:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.579 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:01.579 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.579 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.579 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:01.579 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:01.579 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.579 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.579 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.579 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.579 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.579 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.579 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.579 05:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.579 05:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.579 05:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.579 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.579 "name": "Existed_Raid", 00:10:01.579 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:01.579 "strip_size_kb": 0, 00:10:01.579 "state": "configuring", 00:10:01.579 "raid_level": "raid1", 00:10:01.579 "superblock": false, 00:10:01.579 "num_base_bdevs": 3, 00:10:01.579 "num_base_bdevs_discovered": 1, 00:10:01.579 "num_base_bdevs_operational": 3, 00:10:01.579 "base_bdevs_list": [ 00:10:01.579 { 00:10:01.579 "name": "BaseBdev1", 00:10:01.579 "uuid": "3a0d03f6-42c2-4342-b552-4a809f40c665", 00:10:01.579 "is_configured": true, 00:10:01.579 "data_offset": 0, 00:10:01.579 "data_size": 65536 00:10:01.579 }, 00:10:01.579 { 00:10:01.579 "name": null, 00:10:01.579 "uuid": "998ce644-d905-4fc9-b8b5-7698ce2922a9", 00:10:01.579 "is_configured": false, 00:10:01.579 "data_offset": 0, 00:10:01.579 "data_size": 65536 00:10:01.579 }, 00:10:01.579 { 00:10:01.579 "name": null, 00:10:01.579 "uuid": "1d1a7f9a-e24e-4a6f-80e3-14512e909cd6", 00:10:01.579 "is_configured": false, 00:10:01.579 "data_offset": 0, 00:10:01.579 "data_size": 65536 00:10:01.579 } 00:10:01.579 ] 00:10:01.579 }' 00:10:01.579 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.579 05:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.154 [2024-12-12 05:48:09.424085] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.154 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.154 "name": "Existed_Raid", 00:10:02.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.154 "strip_size_kb": 0, 00:10:02.154 "state": "configuring", 00:10:02.154 "raid_level": "raid1", 00:10:02.154 "superblock": false, 00:10:02.154 "num_base_bdevs": 3, 00:10:02.154 "num_base_bdevs_discovered": 2, 00:10:02.154 "num_base_bdevs_operational": 3, 00:10:02.154 "base_bdevs_list": [ 00:10:02.154 { 00:10:02.154 "name": "BaseBdev1", 00:10:02.154 "uuid": "3a0d03f6-42c2-4342-b552-4a809f40c665", 00:10:02.154 "is_configured": true, 00:10:02.154 "data_offset": 0, 00:10:02.154 "data_size": 65536 00:10:02.154 }, 00:10:02.154 { 00:10:02.154 "name": null, 00:10:02.155 "uuid": "998ce644-d905-4fc9-b8b5-7698ce2922a9", 00:10:02.155 "is_configured": false, 00:10:02.155 "data_offset": 0, 00:10:02.155 "data_size": 65536 00:10:02.155 }, 00:10:02.155 { 00:10:02.155 "name": "BaseBdev3", 00:10:02.155 "uuid": "1d1a7f9a-e24e-4a6f-80e3-14512e909cd6", 00:10:02.155 "is_configured": true, 00:10:02.155 "data_offset": 0, 00:10:02.155 "data_size": 65536 00:10:02.155 } 00:10:02.155 ] 00:10:02.155 }' 00:10:02.155 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.155 05:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.414 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.414 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:02.414 05:48:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.414 05:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.414 05:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.414 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:02.414 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:02.414 05:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.414 05:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.414 [2024-12-12 05:48:09.887373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:02.674 05:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.674 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:02.674 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.674 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.674 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:02.674 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:02.674 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.674 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.674 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.674 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.674 05:48:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.674 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.674 05:48:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.674 05:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.674 05:48:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.674 05:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.674 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.674 "name": "Existed_Raid", 00:10:02.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.674 "strip_size_kb": 0, 00:10:02.674 "state": "configuring", 00:10:02.674 "raid_level": "raid1", 00:10:02.674 "superblock": false, 00:10:02.674 "num_base_bdevs": 3, 00:10:02.674 "num_base_bdevs_discovered": 1, 00:10:02.674 "num_base_bdevs_operational": 3, 00:10:02.674 "base_bdevs_list": [ 00:10:02.674 { 00:10:02.674 "name": null, 00:10:02.674 "uuid": "3a0d03f6-42c2-4342-b552-4a809f40c665", 00:10:02.674 "is_configured": false, 00:10:02.674 "data_offset": 0, 00:10:02.674 "data_size": 65536 00:10:02.674 }, 00:10:02.674 { 00:10:02.674 "name": null, 00:10:02.674 "uuid": "998ce644-d905-4fc9-b8b5-7698ce2922a9", 00:10:02.674 "is_configured": false, 00:10:02.674 "data_offset": 0, 00:10:02.674 "data_size": 65536 00:10:02.674 }, 00:10:02.674 { 00:10:02.674 "name": "BaseBdev3", 00:10:02.674 "uuid": "1d1a7f9a-e24e-4a6f-80e3-14512e909cd6", 00:10:02.674 "is_configured": true, 00:10:02.674 "data_offset": 0, 00:10:02.674 "data_size": 65536 00:10:02.674 } 00:10:02.674 ] 00:10:02.674 }' 00:10:02.674 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.674 05:48:10 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:02.933 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:02.933 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.933 05:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.933 05:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.193 [2024-12-12 05:48:10.487455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.193 "name": "Existed_Raid", 00:10:03.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:03.193 "strip_size_kb": 0, 00:10:03.193 "state": "configuring", 00:10:03.193 "raid_level": "raid1", 00:10:03.193 "superblock": false, 00:10:03.193 "num_base_bdevs": 3, 00:10:03.193 "num_base_bdevs_discovered": 2, 00:10:03.193 "num_base_bdevs_operational": 3, 00:10:03.193 "base_bdevs_list": [ 00:10:03.193 { 00:10:03.193 "name": null, 00:10:03.193 "uuid": "3a0d03f6-42c2-4342-b552-4a809f40c665", 00:10:03.193 "is_configured": false, 00:10:03.193 "data_offset": 0, 00:10:03.193 "data_size": 65536 00:10:03.193 }, 00:10:03.193 { 00:10:03.193 "name": "BaseBdev2", 00:10:03.193 "uuid": "998ce644-d905-4fc9-b8b5-7698ce2922a9", 00:10:03.193 "is_configured": true, 00:10:03.193 "data_offset": 0, 00:10:03.193 "data_size": 65536 00:10:03.193 }, 00:10:03.193 { 
00:10:03.193 "name": "BaseBdev3", 00:10:03.193 "uuid": "1d1a7f9a-e24e-4a6f-80e3-14512e909cd6", 00:10:03.193 "is_configured": true, 00:10:03.193 "data_offset": 0, 00:10:03.193 "data_size": 65536 00:10:03.193 } 00:10:03.193 ] 00:10:03.193 }' 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.193 05:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.453 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.453 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.453 05:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.453 05:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.453 05:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.453 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:03.453 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.453 05:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.453 05:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.453 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:03.453 05:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.713 05:48:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3a0d03f6-42c2-4342-b552-4a809f40c665 00:10:03.713 05:48:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.713 05:48:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.713 [2024-12-12 05:48:11.026871] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:03.713 [2024-12-12 05:48:11.026917] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:03.713 [2024-12-12 05:48:11.026924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:03.713 [2024-12-12 05:48:11.027172] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:03.713 [2024-12-12 05:48:11.027328] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:03.713 [2024-12-12 05:48:11.027339] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:03.713 [2024-12-12 05:48:11.027608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.713 NewBaseBdev 00:10:03.713 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.713 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:03.713 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:03.713 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.713 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:03.713 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.713 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.713 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.714 [ 00:10:03.714 { 00:10:03.714 "name": "NewBaseBdev", 00:10:03.714 "aliases": [ 00:10:03.714 "3a0d03f6-42c2-4342-b552-4a809f40c665" 00:10:03.714 ], 00:10:03.714 "product_name": "Malloc disk", 00:10:03.714 "block_size": 512, 00:10:03.714 "num_blocks": 65536, 00:10:03.714 "uuid": "3a0d03f6-42c2-4342-b552-4a809f40c665", 00:10:03.714 "assigned_rate_limits": { 00:10:03.714 "rw_ios_per_sec": 0, 00:10:03.714 "rw_mbytes_per_sec": 0, 00:10:03.714 "r_mbytes_per_sec": 0, 00:10:03.714 "w_mbytes_per_sec": 0 00:10:03.714 }, 00:10:03.714 "claimed": true, 00:10:03.714 "claim_type": "exclusive_write", 00:10:03.714 "zoned": false, 00:10:03.714 "supported_io_types": { 00:10:03.714 "read": true, 00:10:03.714 "write": true, 00:10:03.714 "unmap": true, 00:10:03.714 "flush": true, 00:10:03.714 "reset": true, 00:10:03.714 "nvme_admin": false, 00:10:03.714 "nvme_io": false, 00:10:03.714 "nvme_io_md": false, 00:10:03.714 "write_zeroes": true, 00:10:03.714 "zcopy": true, 00:10:03.714 "get_zone_info": false, 00:10:03.714 "zone_management": false, 00:10:03.714 "zone_append": false, 00:10:03.714 "compare": false, 00:10:03.714 "compare_and_write": false, 00:10:03.714 "abort": true, 00:10:03.714 "seek_hole": false, 00:10:03.714 "seek_data": false, 00:10:03.714 "copy": true, 00:10:03.714 "nvme_iov_md": false 00:10:03.714 }, 00:10:03.714 "memory_domains": [ 00:10:03.714 { 00:10:03.714 
"dma_device_id": "system", 00:10:03.714 "dma_device_type": 1 00:10:03.714 }, 00:10:03.714 { 00:10:03.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.714 "dma_device_type": 2 00:10:03.714 } 00:10:03.714 ], 00:10:03.714 "driver_specific": {} 00:10:03.714 } 00:10:03.714 ] 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.714 "name": "Existed_Raid", 00:10:03.714 "uuid": "e09a451d-ade2-42f9-b537-5baf07b69d0d", 00:10:03.714 "strip_size_kb": 0, 00:10:03.714 "state": "online", 00:10:03.714 "raid_level": "raid1", 00:10:03.714 "superblock": false, 00:10:03.714 "num_base_bdevs": 3, 00:10:03.714 "num_base_bdevs_discovered": 3, 00:10:03.714 "num_base_bdevs_operational": 3, 00:10:03.714 "base_bdevs_list": [ 00:10:03.714 { 00:10:03.714 "name": "NewBaseBdev", 00:10:03.714 "uuid": "3a0d03f6-42c2-4342-b552-4a809f40c665", 00:10:03.714 "is_configured": true, 00:10:03.714 "data_offset": 0, 00:10:03.714 "data_size": 65536 00:10:03.714 }, 00:10:03.714 { 00:10:03.714 "name": "BaseBdev2", 00:10:03.714 "uuid": "998ce644-d905-4fc9-b8b5-7698ce2922a9", 00:10:03.714 "is_configured": true, 00:10:03.714 "data_offset": 0, 00:10:03.714 "data_size": 65536 00:10:03.714 }, 00:10:03.714 { 00:10:03.714 "name": "BaseBdev3", 00:10:03.714 "uuid": "1d1a7f9a-e24e-4a6f-80e3-14512e909cd6", 00:10:03.714 "is_configured": true, 00:10:03.714 "data_offset": 0, 00:10:03.714 "data_size": 65536 00:10:03.714 } 00:10:03.714 ] 00:10:03.714 }' 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.714 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.973 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:03.973 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:03.973 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:03.973 05:48:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:03.973 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:03.973 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:03.973 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:03.973 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:03.973 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.973 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.973 [2024-12-12 05:48:11.478578] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.233 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.233 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:04.234 "name": "Existed_Raid", 00:10:04.234 "aliases": [ 00:10:04.234 "e09a451d-ade2-42f9-b537-5baf07b69d0d" 00:10:04.234 ], 00:10:04.234 "product_name": "Raid Volume", 00:10:04.234 "block_size": 512, 00:10:04.234 "num_blocks": 65536, 00:10:04.234 "uuid": "e09a451d-ade2-42f9-b537-5baf07b69d0d", 00:10:04.234 "assigned_rate_limits": { 00:10:04.234 "rw_ios_per_sec": 0, 00:10:04.234 "rw_mbytes_per_sec": 0, 00:10:04.234 "r_mbytes_per_sec": 0, 00:10:04.234 "w_mbytes_per_sec": 0 00:10:04.234 }, 00:10:04.234 "claimed": false, 00:10:04.234 "zoned": false, 00:10:04.234 "supported_io_types": { 00:10:04.234 "read": true, 00:10:04.234 "write": true, 00:10:04.234 "unmap": false, 00:10:04.234 "flush": false, 00:10:04.234 "reset": true, 00:10:04.234 "nvme_admin": false, 00:10:04.234 "nvme_io": false, 00:10:04.234 "nvme_io_md": false, 00:10:04.234 "write_zeroes": true, 00:10:04.234 "zcopy": false, 00:10:04.234 
"get_zone_info": false, 00:10:04.234 "zone_management": false, 00:10:04.234 "zone_append": false, 00:10:04.234 "compare": false, 00:10:04.234 "compare_and_write": false, 00:10:04.234 "abort": false, 00:10:04.234 "seek_hole": false, 00:10:04.234 "seek_data": false, 00:10:04.234 "copy": false, 00:10:04.234 "nvme_iov_md": false 00:10:04.234 }, 00:10:04.234 "memory_domains": [ 00:10:04.234 { 00:10:04.234 "dma_device_id": "system", 00:10:04.234 "dma_device_type": 1 00:10:04.234 }, 00:10:04.234 { 00:10:04.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.234 "dma_device_type": 2 00:10:04.234 }, 00:10:04.234 { 00:10:04.234 "dma_device_id": "system", 00:10:04.234 "dma_device_type": 1 00:10:04.234 }, 00:10:04.234 { 00:10:04.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.234 "dma_device_type": 2 00:10:04.234 }, 00:10:04.234 { 00:10:04.234 "dma_device_id": "system", 00:10:04.234 "dma_device_type": 1 00:10:04.234 }, 00:10:04.234 { 00:10:04.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.234 "dma_device_type": 2 00:10:04.234 } 00:10:04.234 ], 00:10:04.234 "driver_specific": { 00:10:04.234 "raid": { 00:10:04.234 "uuid": "e09a451d-ade2-42f9-b537-5baf07b69d0d", 00:10:04.234 "strip_size_kb": 0, 00:10:04.234 "state": "online", 00:10:04.234 "raid_level": "raid1", 00:10:04.234 "superblock": false, 00:10:04.234 "num_base_bdevs": 3, 00:10:04.234 "num_base_bdevs_discovered": 3, 00:10:04.234 "num_base_bdevs_operational": 3, 00:10:04.234 "base_bdevs_list": [ 00:10:04.234 { 00:10:04.234 "name": "NewBaseBdev", 00:10:04.234 "uuid": "3a0d03f6-42c2-4342-b552-4a809f40c665", 00:10:04.234 "is_configured": true, 00:10:04.234 "data_offset": 0, 00:10:04.234 "data_size": 65536 00:10:04.234 }, 00:10:04.234 { 00:10:04.234 "name": "BaseBdev2", 00:10:04.234 "uuid": "998ce644-d905-4fc9-b8b5-7698ce2922a9", 00:10:04.234 "is_configured": true, 00:10:04.234 "data_offset": 0, 00:10:04.234 "data_size": 65536 00:10:04.234 }, 00:10:04.234 { 00:10:04.234 "name": "BaseBdev3", 00:10:04.234 "uuid": 
"1d1a7f9a-e24e-4a6f-80e3-14512e909cd6", 00:10:04.234 "is_configured": true, 00:10:04.234 "data_offset": 0, 00:10:04.234 "data_size": 65536 00:10:04.234 } 00:10:04.234 ] 00:10:04.234 } 00:10:04.234 } 00:10:04.234 }' 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:04.234 BaseBdev2 00:10:04.234 BaseBdev3' 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.234 
[2024-12-12 05:48:11.717826] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:04.234 [2024-12-12 05:48:11.717856] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.234 [2024-12-12 05:48:11.717926] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.234 [2024-12-12 05:48:11.718194] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.234 [2024-12-12 05:48:11.718203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 68337 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 68337 ']' 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 68337 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:04.234 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68337 00:10:04.494 killing process with pid 68337 00:10:04.494 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:04.494 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:04.494 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68337' 00:10:04.494 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 68337 00:10:04.494 [2024-12-12 
05:48:11.760543] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:04.494 05:48:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 68337 00:10:04.753 [2024-12-12 05:48:12.053859] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:05.692 00:10:05.692 real 0m9.998s 00:10:05.692 user 0m15.964s 00:10:05.692 sys 0m1.649s 00:10:05.692 ************************************ 00:10:05.692 END TEST raid_state_function_test 00:10:05.692 ************************************ 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.692 05:48:13 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:05.692 05:48:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:05.692 05:48:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.692 05:48:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:05.692 ************************************ 00:10:05.692 START TEST raid_state_function_test_sb 00:10:05.692 ************************************ 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:05.692 05:48:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:05.692 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:05.693 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:05.693 
05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:05.693 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:05.693 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:05.693 Process raid pid: 68958 00:10:05.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.693 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68958 00:10:05.693 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68958' 00:10:05.693 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68958 00:10:05.693 05:48:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:05.693 05:48:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 68958 ']' 00:10:05.693 05:48:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.693 05:48:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:05.693 05:48:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.693 05:48:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:05.693 05:48:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.951 [2024-12-12 05:48:13.280678] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:10:05.951 [2024-12-12 05:48:13.280899] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.951 [2024-12-12 05:48:13.452531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.211 [2024-12-12 05:48:13.557266] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.471 [2024-12-12 05:48:13.751067] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:06.471 [2024-12-12 05:48:13.751152] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.731 [2024-12-12 05:48:14.102044] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.731 [2024-12-12 05:48:14.102155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.731 [2024-12-12 05:48:14.102186] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.731 [2024-12-12 05:48:14.102209] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.731 [2024-12-12 05:48:14.102233] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:06.731 [2024-12-12 05:48:14.102253] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.731 "name": "Existed_Raid", 00:10:06.731 "uuid": "36e2fea8-01fc-4fed-a1b0-795611961dbe", 00:10:06.731 "strip_size_kb": 0, 00:10:06.731 "state": "configuring", 00:10:06.731 "raid_level": "raid1", 00:10:06.731 "superblock": true, 00:10:06.731 "num_base_bdevs": 3, 00:10:06.731 "num_base_bdevs_discovered": 0, 00:10:06.731 "num_base_bdevs_operational": 3, 00:10:06.731 "base_bdevs_list": [ 00:10:06.731 { 00:10:06.731 "name": "BaseBdev1", 00:10:06.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.731 "is_configured": false, 00:10:06.731 "data_offset": 0, 00:10:06.731 "data_size": 0 00:10:06.731 }, 00:10:06.731 { 00:10:06.731 "name": "BaseBdev2", 00:10:06.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.731 "is_configured": false, 00:10:06.731 "data_offset": 0, 00:10:06.731 "data_size": 0 00:10:06.731 }, 00:10:06.731 { 00:10:06.731 "name": "BaseBdev3", 00:10:06.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.731 "is_configured": false, 00:10:06.731 "data_offset": 0, 00:10:06.731 "data_size": 0 00:10:06.731 } 00:10:06.731 ] 00:10:06.731 }' 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.731 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.302 [2024-12-12 05:48:14.561242] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:07.302 [2024-12-12 05:48:14.561279] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.302 [2024-12-12 05:48:14.569225] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:07.302 [2024-12-12 05:48:14.569308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:07.302 [2024-12-12 05:48:14.569322] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:07.302 [2024-12-12 05:48:14.569332] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:07.302 [2024-12-12 05:48:14.569338] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:07.302 [2024-12-12 05:48:14.569346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.302 BaseBdev1 00:10:07.302 [2024-12-12 05:48:14.611022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.302 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.302 [ 00:10:07.302 { 00:10:07.302 "name": "BaseBdev1", 00:10:07.302 "aliases": [ 00:10:07.302 "1fe9145e-e030-447b-8a0b-b2f697d8484b" 00:10:07.302 ], 00:10:07.302 "product_name": "Malloc disk", 00:10:07.302 "block_size": 512, 00:10:07.302 "num_blocks": 65536, 00:10:07.302 "uuid": "1fe9145e-e030-447b-8a0b-b2f697d8484b", 00:10:07.302 "assigned_rate_limits": { 00:10:07.302 
"rw_ios_per_sec": 0, 00:10:07.302 "rw_mbytes_per_sec": 0, 00:10:07.302 "r_mbytes_per_sec": 0, 00:10:07.302 "w_mbytes_per_sec": 0 00:10:07.302 }, 00:10:07.302 "claimed": true, 00:10:07.302 "claim_type": "exclusive_write", 00:10:07.302 "zoned": false, 00:10:07.302 "supported_io_types": { 00:10:07.302 "read": true, 00:10:07.302 "write": true, 00:10:07.302 "unmap": true, 00:10:07.302 "flush": true, 00:10:07.302 "reset": true, 00:10:07.302 "nvme_admin": false, 00:10:07.302 "nvme_io": false, 00:10:07.302 "nvme_io_md": false, 00:10:07.302 "write_zeroes": true, 00:10:07.303 "zcopy": true, 00:10:07.303 "get_zone_info": false, 00:10:07.303 "zone_management": false, 00:10:07.303 "zone_append": false, 00:10:07.303 "compare": false, 00:10:07.303 "compare_and_write": false, 00:10:07.303 "abort": true, 00:10:07.303 "seek_hole": false, 00:10:07.303 "seek_data": false, 00:10:07.303 "copy": true, 00:10:07.303 "nvme_iov_md": false 00:10:07.303 }, 00:10:07.303 "memory_domains": [ 00:10:07.303 { 00:10:07.303 "dma_device_id": "system", 00:10:07.303 "dma_device_type": 1 00:10:07.303 }, 00:10:07.303 { 00:10:07.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.303 "dma_device_type": 2 00:10:07.303 } 00:10:07.303 ], 00:10:07.303 "driver_specific": {} 00:10:07.303 } 00:10:07.303 ] 00:10:07.303 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.303 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:07.303 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:07.303 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.303 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.303 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:07.303 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.303 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.303 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.303 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.303 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.303 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.303 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.303 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.303 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.303 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.303 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.303 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.303 "name": "Existed_Raid", 00:10:07.303 "uuid": "6993d6c3-ab90-4ab6-b716-e41e73bfd993", 00:10:07.303 "strip_size_kb": 0, 00:10:07.303 "state": "configuring", 00:10:07.303 "raid_level": "raid1", 00:10:07.303 "superblock": true, 00:10:07.303 "num_base_bdevs": 3, 00:10:07.303 "num_base_bdevs_discovered": 1, 00:10:07.303 "num_base_bdevs_operational": 3, 00:10:07.303 "base_bdevs_list": [ 00:10:07.303 { 00:10:07.303 "name": "BaseBdev1", 00:10:07.303 "uuid": "1fe9145e-e030-447b-8a0b-b2f697d8484b", 00:10:07.303 "is_configured": true, 00:10:07.303 "data_offset": 2048, 00:10:07.303 "data_size": 63488 
00:10:07.303 }, 00:10:07.303 { 00:10:07.303 "name": "BaseBdev2", 00:10:07.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.303 "is_configured": false, 00:10:07.303 "data_offset": 0, 00:10:07.303 "data_size": 0 00:10:07.303 }, 00:10:07.303 { 00:10:07.303 "name": "BaseBdev3", 00:10:07.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.303 "is_configured": false, 00:10:07.303 "data_offset": 0, 00:10:07.303 "data_size": 0 00:10:07.303 } 00:10:07.303 ] 00:10:07.303 }' 00:10:07.303 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.303 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.563 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:07.563 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.563 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.563 [2024-12-12 05:48:14.990412] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:07.563 [2024-12-12 05:48:14.990509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:07.563 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.563 05:48:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:07.563 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.563 05:48:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.563 [2024-12-12 05:48:14.998445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.563 [2024-12-12 05:48:15.000300] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:07.563 [2024-12-12 05:48:15.000379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:07.563 [2024-12-12 05:48:15.000393] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:07.563 [2024-12-12 05:48:15.000402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:07.563 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.563 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:07.563 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.563 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:07.563 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.563 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.563 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:07.563 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:07.563 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.563 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.563 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.563 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.563 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:07.563 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.563 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.563 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.563 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.563 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.563 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.563 "name": "Existed_Raid", 00:10:07.563 "uuid": "cb8903b8-1e8d-4e1e-be26-fcd61d3cda8d", 00:10:07.563 "strip_size_kb": 0, 00:10:07.563 "state": "configuring", 00:10:07.563 "raid_level": "raid1", 00:10:07.563 "superblock": true, 00:10:07.563 "num_base_bdevs": 3, 00:10:07.563 "num_base_bdevs_discovered": 1, 00:10:07.563 "num_base_bdevs_operational": 3, 00:10:07.563 "base_bdevs_list": [ 00:10:07.563 { 00:10:07.563 "name": "BaseBdev1", 00:10:07.563 "uuid": "1fe9145e-e030-447b-8a0b-b2f697d8484b", 00:10:07.563 "is_configured": true, 00:10:07.563 "data_offset": 2048, 00:10:07.563 "data_size": 63488 00:10:07.563 }, 00:10:07.563 { 00:10:07.563 "name": "BaseBdev2", 00:10:07.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.563 "is_configured": false, 00:10:07.563 "data_offset": 0, 00:10:07.563 "data_size": 0 00:10:07.563 }, 00:10:07.563 { 00:10:07.563 "name": "BaseBdev3", 00:10:07.563 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.563 "is_configured": false, 00:10:07.563 "data_offset": 0, 00:10:07.563 "data_size": 0 00:10:07.563 } 00:10:07.563 ] 00:10:07.563 }' 00:10:07.563 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.563 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.133 BaseBdev2 00:10:08.133 [2024-12-12 05:48:15.474697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.133 [ 00:10:08.133 { 00:10:08.133 "name": "BaseBdev2", 00:10:08.133 "aliases": [ 00:10:08.133 "26e85dcc-ffa9-4e5e-8c97-67aa7596af57" 00:10:08.133 ], 00:10:08.133 "product_name": "Malloc disk", 00:10:08.133 "block_size": 512, 00:10:08.133 "num_blocks": 65536, 00:10:08.133 "uuid": "26e85dcc-ffa9-4e5e-8c97-67aa7596af57", 00:10:08.133 "assigned_rate_limits": { 00:10:08.133 "rw_ios_per_sec": 0, 00:10:08.133 "rw_mbytes_per_sec": 0, 00:10:08.133 "r_mbytes_per_sec": 0, 00:10:08.133 "w_mbytes_per_sec": 0 00:10:08.133 }, 00:10:08.133 "claimed": true, 00:10:08.133 "claim_type": "exclusive_write", 00:10:08.133 "zoned": false, 00:10:08.133 "supported_io_types": { 00:10:08.133 "read": true, 00:10:08.133 "write": true, 00:10:08.133 "unmap": true, 00:10:08.133 "flush": true, 00:10:08.133 "reset": true, 00:10:08.133 "nvme_admin": false, 00:10:08.133 "nvme_io": false, 00:10:08.133 "nvme_io_md": false, 00:10:08.133 "write_zeroes": true, 00:10:08.133 "zcopy": true, 00:10:08.133 "get_zone_info": false, 00:10:08.133 "zone_management": false, 00:10:08.133 "zone_append": false, 00:10:08.133 "compare": false, 00:10:08.133 "compare_and_write": false, 00:10:08.133 "abort": true, 00:10:08.133 "seek_hole": false, 00:10:08.133 "seek_data": false, 00:10:08.133 "copy": true, 00:10:08.133 "nvme_iov_md": false 00:10:08.133 }, 00:10:08.133 "memory_domains": [ 00:10:08.133 { 00:10:08.133 "dma_device_id": "system", 00:10:08.133 "dma_device_type": 1 00:10:08.133 }, 00:10:08.133 { 00:10:08.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.133 "dma_device_type": 2 00:10:08.133 } 00:10:08.133 ], 00:10:08.133 "driver_specific": {} 00:10:08.133 } 00:10:08.133 ] 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.133 
05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.133 "name": "Existed_Raid", 00:10:08.133 "uuid": "cb8903b8-1e8d-4e1e-be26-fcd61d3cda8d", 00:10:08.133 "strip_size_kb": 0, 00:10:08.133 "state": "configuring", 00:10:08.133 "raid_level": "raid1", 00:10:08.133 "superblock": true, 00:10:08.133 "num_base_bdevs": 3, 00:10:08.133 "num_base_bdevs_discovered": 2, 00:10:08.133 "num_base_bdevs_operational": 3, 00:10:08.133 "base_bdevs_list": [ 00:10:08.133 { 00:10:08.133 "name": "BaseBdev1", 00:10:08.133 "uuid": "1fe9145e-e030-447b-8a0b-b2f697d8484b", 00:10:08.133 "is_configured": true, 00:10:08.133 "data_offset": 2048, 00:10:08.133 "data_size": 63488 00:10:08.133 }, 00:10:08.133 { 00:10:08.133 "name": "BaseBdev2", 00:10:08.133 "uuid": "26e85dcc-ffa9-4e5e-8c97-67aa7596af57", 00:10:08.133 "is_configured": true, 00:10:08.133 "data_offset": 2048, 00:10:08.133 "data_size": 63488 00:10:08.133 }, 00:10:08.133 { 00:10:08.133 "name": "BaseBdev3", 00:10:08.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.133 "is_configured": false, 00:10:08.133 "data_offset": 0, 00:10:08.133 "data_size": 0 00:10:08.133 } 00:10:08.133 ] 00:10:08.133 }' 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.133 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.409 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:08.409 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.409 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.674 [2024-12-12 05:48:15.969097] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.674 [2024-12-12 05:48:15.969491] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:08.674 [2024-12-12 05:48:15.969580] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:08.674 [2024-12-12 05:48:15.969898] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:08.674 BaseBdev3 00:10:08.674 [2024-12-12 05:48:15.970158] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:08.674 [2024-12-12 05:48:15.970216] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.674 [2024-12-12 05:48:15.970513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.674 05:48:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.674 [ 00:10:08.674 { 00:10:08.674 "name": "BaseBdev3", 00:10:08.674 "aliases": [ 00:10:08.674 "bbeb36fa-4454-4e19-8cb0-5449d48b9e02" 00:10:08.674 ], 00:10:08.674 "product_name": "Malloc disk", 00:10:08.674 "block_size": 512, 00:10:08.674 "num_blocks": 65536, 00:10:08.674 "uuid": "bbeb36fa-4454-4e19-8cb0-5449d48b9e02", 00:10:08.674 "assigned_rate_limits": { 00:10:08.674 "rw_ios_per_sec": 0, 00:10:08.674 "rw_mbytes_per_sec": 0, 00:10:08.674 "r_mbytes_per_sec": 0, 00:10:08.674 "w_mbytes_per_sec": 0 00:10:08.674 }, 00:10:08.674 "claimed": true, 00:10:08.674 "claim_type": "exclusive_write", 00:10:08.674 "zoned": false, 00:10:08.674 "supported_io_types": { 00:10:08.674 "read": true, 00:10:08.674 "write": true, 00:10:08.674 "unmap": true, 00:10:08.674 "flush": true, 00:10:08.674 "reset": true, 00:10:08.674 "nvme_admin": false, 00:10:08.674 "nvme_io": false, 00:10:08.674 "nvme_io_md": false, 00:10:08.674 "write_zeroes": true, 00:10:08.674 "zcopy": true, 00:10:08.674 "get_zone_info": false, 00:10:08.674 "zone_management": false, 00:10:08.674 "zone_append": false, 00:10:08.674 "compare": false, 00:10:08.674 "compare_and_write": false, 00:10:08.674 "abort": true, 00:10:08.674 "seek_hole": false, 00:10:08.674 "seek_data": false, 00:10:08.674 "copy": true, 00:10:08.674 "nvme_iov_md": false 00:10:08.674 }, 00:10:08.674 "memory_domains": [ 00:10:08.674 { 00:10:08.674 "dma_device_id": "system", 00:10:08.674 "dma_device_type": 1 00:10:08.674 }, 00:10:08.674 { 00:10:08.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.674 "dma_device_type": 2 00:10:08.674 } 00:10:08.674 ], 00:10:08.674 "driver_specific": {} 00:10:08.674 } 00:10:08.674 ] 
00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.674 05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.674 
05:48:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.674 05:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.674 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.674 "name": "Existed_Raid", 00:10:08.674 "uuid": "cb8903b8-1e8d-4e1e-be26-fcd61d3cda8d", 00:10:08.674 "strip_size_kb": 0, 00:10:08.674 "state": "online", 00:10:08.674 "raid_level": "raid1", 00:10:08.674 "superblock": true, 00:10:08.674 "num_base_bdevs": 3, 00:10:08.674 "num_base_bdevs_discovered": 3, 00:10:08.674 "num_base_bdevs_operational": 3, 00:10:08.674 "base_bdevs_list": [ 00:10:08.674 { 00:10:08.674 "name": "BaseBdev1", 00:10:08.674 "uuid": "1fe9145e-e030-447b-8a0b-b2f697d8484b", 00:10:08.674 "is_configured": true, 00:10:08.674 "data_offset": 2048, 00:10:08.674 "data_size": 63488 00:10:08.674 }, 00:10:08.674 { 00:10:08.674 "name": "BaseBdev2", 00:10:08.674 "uuid": "26e85dcc-ffa9-4e5e-8c97-67aa7596af57", 00:10:08.674 "is_configured": true, 00:10:08.674 "data_offset": 2048, 00:10:08.674 "data_size": 63488 00:10:08.674 }, 00:10:08.674 { 00:10:08.674 "name": "BaseBdev3", 00:10:08.674 "uuid": "bbeb36fa-4454-4e19-8cb0-5449d48b9e02", 00:10:08.674 "is_configured": true, 00:10:08.674 "data_offset": 2048, 00:10:08.674 "data_size": 63488 00:10:08.674 } 00:10:08.674 ] 00:10:08.674 }' 00:10:08.674 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.674 05:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.934 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:08.934 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:08.934 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:08.934 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:08.934 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:08.934 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:08.934 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:08.934 05:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.934 05:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.934 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:08.934 [2024-12-12 05:48:16.416751] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.934 05:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:09.194 "name": "Existed_Raid", 00:10:09.194 "aliases": [ 00:10:09.194 "cb8903b8-1e8d-4e1e-be26-fcd61d3cda8d" 00:10:09.194 ], 00:10:09.194 "product_name": "Raid Volume", 00:10:09.194 "block_size": 512, 00:10:09.194 "num_blocks": 63488, 00:10:09.194 "uuid": "cb8903b8-1e8d-4e1e-be26-fcd61d3cda8d", 00:10:09.194 "assigned_rate_limits": { 00:10:09.194 "rw_ios_per_sec": 0, 00:10:09.194 "rw_mbytes_per_sec": 0, 00:10:09.194 "r_mbytes_per_sec": 0, 00:10:09.194 "w_mbytes_per_sec": 0 00:10:09.194 }, 00:10:09.194 "claimed": false, 00:10:09.194 "zoned": false, 00:10:09.194 "supported_io_types": { 00:10:09.194 "read": true, 00:10:09.194 "write": true, 00:10:09.194 "unmap": false, 00:10:09.194 "flush": false, 00:10:09.194 "reset": true, 00:10:09.194 "nvme_admin": false, 00:10:09.194 "nvme_io": false, 00:10:09.194 "nvme_io_md": false, 00:10:09.194 "write_zeroes": true, 
00:10:09.194 "zcopy": false, 00:10:09.194 "get_zone_info": false, 00:10:09.194 "zone_management": false, 00:10:09.194 "zone_append": false, 00:10:09.194 "compare": false, 00:10:09.194 "compare_and_write": false, 00:10:09.194 "abort": false, 00:10:09.194 "seek_hole": false, 00:10:09.194 "seek_data": false, 00:10:09.194 "copy": false, 00:10:09.194 "nvme_iov_md": false 00:10:09.194 }, 00:10:09.194 "memory_domains": [ 00:10:09.194 { 00:10:09.194 "dma_device_id": "system", 00:10:09.194 "dma_device_type": 1 00:10:09.194 }, 00:10:09.194 { 00:10:09.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.194 "dma_device_type": 2 00:10:09.194 }, 00:10:09.194 { 00:10:09.194 "dma_device_id": "system", 00:10:09.194 "dma_device_type": 1 00:10:09.194 }, 00:10:09.194 { 00:10:09.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.194 "dma_device_type": 2 00:10:09.194 }, 00:10:09.194 { 00:10:09.194 "dma_device_id": "system", 00:10:09.194 "dma_device_type": 1 00:10:09.194 }, 00:10:09.194 { 00:10:09.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.194 "dma_device_type": 2 00:10:09.194 } 00:10:09.194 ], 00:10:09.194 "driver_specific": { 00:10:09.194 "raid": { 00:10:09.194 "uuid": "cb8903b8-1e8d-4e1e-be26-fcd61d3cda8d", 00:10:09.194 "strip_size_kb": 0, 00:10:09.194 "state": "online", 00:10:09.194 "raid_level": "raid1", 00:10:09.194 "superblock": true, 00:10:09.194 "num_base_bdevs": 3, 00:10:09.194 "num_base_bdevs_discovered": 3, 00:10:09.194 "num_base_bdevs_operational": 3, 00:10:09.194 "base_bdevs_list": [ 00:10:09.194 { 00:10:09.194 "name": "BaseBdev1", 00:10:09.194 "uuid": "1fe9145e-e030-447b-8a0b-b2f697d8484b", 00:10:09.194 "is_configured": true, 00:10:09.194 "data_offset": 2048, 00:10:09.194 "data_size": 63488 00:10:09.194 }, 00:10:09.194 { 00:10:09.194 "name": "BaseBdev2", 00:10:09.194 "uuid": "26e85dcc-ffa9-4e5e-8c97-67aa7596af57", 00:10:09.194 "is_configured": true, 00:10:09.194 "data_offset": 2048, 00:10:09.194 "data_size": 63488 00:10:09.194 }, 00:10:09.194 { 
00:10:09.194 "name": "BaseBdev3", 00:10:09.194 "uuid": "bbeb36fa-4454-4e19-8cb0-5449d48b9e02", 00:10:09.194 "is_configured": true, 00:10:09.194 "data_offset": 2048, 00:10:09.194 "data_size": 63488 00:10:09.194 } 00:10:09.194 ] 00:10:09.194 } 00:10:09.194 } 00:10:09.194 }' 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:09.194 BaseBdev2 00:10:09.194 BaseBdev3' 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.194 05:48:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.194 05:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.195 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.195 05:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.195 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.195 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.195 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:09.195 05:48:16 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.195 05:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.195 [2024-12-12 05:48:16.707958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:09.454 05:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.454 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:09.454 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:09.454 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:09.454 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:09.454 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:09.455 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:09.455 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.455 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:09.455 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:09.455 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:09.455 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:09.455 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:09.455 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.455 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.455 
05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.455 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.455 05:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.455 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.455 05:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.455 05:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.455 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.455 "name": "Existed_Raid", 00:10:09.455 "uuid": "cb8903b8-1e8d-4e1e-be26-fcd61d3cda8d", 00:10:09.455 "strip_size_kb": 0, 00:10:09.455 "state": "online", 00:10:09.455 "raid_level": "raid1", 00:10:09.455 "superblock": true, 00:10:09.455 "num_base_bdevs": 3, 00:10:09.455 "num_base_bdevs_discovered": 2, 00:10:09.455 "num_base_bdevs_operational": 2, 00:10:09.455 "base_bdevs_list": [ 00:10:09.455 { 00:10:09.455 "name": null, 00:10:09.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.455 "is_configured": false, 00:10:09.455 "data_offset": 0, 00:10:09.455 "data_size": 63488 00:10:09.455 }, 00:10:09.455 { 00:10:09.455 "name": "BaseBdev2", 00:10:09.455 "uuid": "26e85dcc-ffa9-4e5e-8c97-67aa7596af57", 00:10:09.455 "is_configured": true, 00:10:09.455 "data_offset": 2048, 00:10:09.455 "data_size": 63488 00:10:09.455 }, 00:10:09.455 { 00:10:09.455 "name": "BaseBdev3", 00:10:09.455 "uuid": "bbeb36fa-4454-4e19-8cb0-5449d48b9e02", 00:10:09.455 "is_configured": true, 00:10:09.455 "data_offset": 2048, 00:10:09.455 "data_size": 63488 00:10:09.455 } 00:10:09.455 ] 00:10:09.455 }' 00:10:09.455 05:48:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.455 
05:48:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.714 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:09.714 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.714 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.714 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:09.714 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.714 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.714 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.974 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:09.974 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:09.974 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:09.974 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.974 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.974 [2024-12-12 05:48:17.247107] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:09.974 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.974 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:09.974 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:09.974 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:09.974 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.974 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:09.974 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.974 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.974 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:09.974 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:09.974 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:09.974 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.974 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.974 [2024-12-12 05:48:17.396154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:09.974 [2024-12-12 05:48:17.396298] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:09.974 [2024-12-12 05:48:17.491271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:09.974 [2024-12-12 05:48:17.491370] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:09.974 [2024-12-12 05:48:17.491414] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:09.974 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.974 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:09.974 05:48:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.235 BaseBdev2 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.235 [ 00:10:10.235 { 00:10:10.235 "name": "BaseBdev2", 00:10:10.235 "aliases": [ 00:10:10.235 "3964104b-5860-40aa-9eeb-fb36d7569eb1" 00:10:10.235 ], 00:10:10.235 "product_name": "Malloc disk", 00:10:10.235 "block_size": 512, 00:10:10.235 "num_blocks": 65536, 00:10:10.235 "uuid": "3964104b-5860-40aa-9eeb-fb36d7569eb1", 00:10:10.235 "assigned_rate_limits": { 00:10:10.235 "rw_ios_per_sec": 0, 00:10:10.235 "rw_mbytes_per_sec": 0, 00:10:10.235 "r_mbytes_per_sec": 0, 00:10:10.235 "w_mbytes_per_sec": 0 00:10:10.235 }, 00:10:10.235 "claimed": false, 00:10:10.235 "zoned": false, 00:10:10.235 "supported_io_types": { 00:10:10.235 "read": true, 00:10:10.235 "write": true, 00:10:10.235 "unmap": true, 00:10:10.235 "flush": true, 00:10:10.235 "reset": true, 00:10:10.235 "nvme_admin": false, 00:10:10.235 "nvme_io": false, 00:10:10.235 
"nvme_io_md": false, 00:10:10.235 "write_zeroes": true, 00:10:10.235 "zcopy": true, 00:10:10.235 "get_zone_info": false, 00:10:10.235 "zone_management": false, 00:10:10.235 "zone_append": false, 00:10:10.235 "compare": false, 00:10:10.235 "compare_and_write": false, 00:10:10.235 "abort": true, 00:10:10.235 "seek_hole": false, 00:10:10.235 "seek_data": false, 00:10:10.235 "copy": true, 00:10:10.235 "nvme_iov_md": false 00:10:10.235 }, 00:10:10.235 "memory_domains": [ 00:10:10.235 { 00:10:10.235 "dma_device_id": "system", 00:10:10.235 "dma_device_type": 1 00:10:10.235 }, 00:10:10.235 { 00:10:10.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.235 "dma_device_type": 2 00:10:10.235 } 00:10:10.235 ], 00:10:10.235 "driver_specific": {} 00:10:10.235 } 00:10:10.235 ] 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.235 BaseBdev3 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.235 [ 00:10:10.235 { 00:10:10.235 "name": "BaseBdev3", 00:10:10.235 "aliases": [ 00:10:10.235 "66a9629d-2fc5-4e5c-99fb-fe7690a5d44d" 00:10:10.235 ], 00:10:10.235 "product_name": "Malloc disk", 00:10:10.235 "block_size": 512, 00:10:10.235 "num_blocks": 65536, 00:10:10.235 "uuid": "66a9629d-2fc5-4e5c-99fb-fe7690a5d44d", 00:10:10.235 "assigned_rate_limits": { 00:10:10.235 "rw_ios_per_sec": 0, 00:10:10.235 "rw_mbytes_per_sec": 0, 00:10:10.235 "r_mbytes_per_sec": 0, 00:10:10.235 "w_mbytes_per_sec": 0 00:10:10.235 }, 00:10:10.235 "claimed": false, 00:10:10.235 "zoned": false, 00:10:10.235 "supported_io_types": { 00:10:10.235 "read": true, 00:10:10.235 "write": true, 00:10:10.235 "unmap": true, 00:10:10.235 "flush": true, 00:10:10.235 "reset": true, 00:10:10.235 "nvme_admin": false, 
00:10:10.235 "nvme_io": false, 00:10:10.235 "nvme_io_md": false, 00:10:10.235 "write_zeroes": true, 00:10:10.235 "zcopy": true, 00:10:10.235 "get_zone_info": false, 00:10:10.235 "zone_management": false, 00:10:10.235 "zone_append": false, 00:10:10.235 "compare": false, 00:10:10.235 "compare_and_write": false, 00:10:10.235 "abort": true, 00:10:10.235 "seek_hole": false, 00:10:10.235 "seek_data": false, 00:10:10.235 "copy": true, 00:10:10.235 "nvme_iov_md": false 00:10:10.235 }, 00:10:10.235 "memory_domains": [ 00:10:10.235 { 00:10:10.235 "dma_device_id": "system", 00:10:10.235 "dma_device_type": 1 00:10:10.235 }, 00:10:10.235 { 00:10:10.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.235 "dma_device_type": 2 00:10:10.235 } 00:10:10.235 ], 00:10:10.235 "driver_specific": {} 00:10:10.235 } 00:10:10.235 ] 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:10.235 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:10.236 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.236 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.236 [2024-12-12 05:48:17.667133] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.236 [2024-12-12 05:48:17.667232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.236 [2024-12-12 05:48:17.667270] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.236 [2024-12-12 05:48:17.669068] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:10.236 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.236 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:10.236 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.236 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.236 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.236 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.236 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.236 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.236 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.236 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.236 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.236 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.236 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.236 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.236 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.236 
05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.236 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.236 "name": "Existed_Raid", 00:10:10.236 "uuid": "e6992d03-1191-476a-914c-44c0577f93ed", 00:10:10.236 "strip_size_kb": 0, 00:10:10.236 "state": "configuring", 00:10:10.236 "raid_level": "raid1", 00:10:10.236 "superblock": true, 00:10:10.236 "num_base_bdevs": 3, 00:10:10.236 "num_base_bdevs_discovered": 2, 00:10:10.236 "num_base_bdevs_operational": 3, 00:10:10.236 "base_bdevs_list": [ 00:10:10.236 { 00:10:10.236 "name": "BaseBdev1", 00:10:10.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.236 "is_configured": false, 00:10:10.236 "data_offset": 0, 00:10:10.236 "data_size": 0 00:10:10.236 }, 00:10:10.236 { 00:10:10.236 "name": "BaseBdev2", 00:10:10.236 "uuid": "3964104b-5860-40aa-9eeb-fb36d7569eb1", 00:10:10.236 "is_configured": true, 00:10:10.236 "data_offset": 2048, 00:10:10.236 "data_size": 63488 00:10:10.236 }, 00:10:10.236 { 00:10:10.236 "name": "BaseBdev3", 00:10:10.236 "uuid": "66a9629d-2fc5-4e5c-99fb-fe7690a5d44d", 00:10:10.236 "is_configured": true, 00:10:10.236 "data_offset": 2048, 00:10:10.236 "data_size": 63488 00:10:10.236 } 00:10:10.236 ] 00:10:10.236 }' 00:10:10.236 05:48:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.236 05:48:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.805 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:10.805 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.805 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.805 [2024-12-12 05:48:18.054551] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:10.805 05:48:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.805 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:10.805 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.805 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.805 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:10.805 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:10.805 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.805 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.805 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.806 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.806 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.806 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.806 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.806 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.806 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.806 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.806 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.806 "name": 
"Existed_Raid", 00:10:10.806 "uuid": "e6992d03-1191-476a-914c-44c0577f93ed", 00:10:10.806 "strip_size_kb": 0, 00:10:10.806 "state": "configuring", 00:10:10.806 "raid_level": "raid1", 00:10:10.806 "superblock": true, 00:10:10.806 "num_base_bdevs": 3, 00:10:10.806 "num_base_bdevs_discovered": 1, 00:10:10.806 "num_base_bdevs_operational": 3, 00:10:10.806 "base_bdevs_list": [ 00:10:10.806 { 00:10:10.806 "name": "BaseBdev1", 00:10:10.806 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.806 "is_configured": false, 00:10:10.806 "data_offset": 0, 00:10:10.806 "data_size": 0 00:10:10.806 }, 00:10:10.806 { 00:10:10.806 "name": null, 00:10:10.806 "uuid": "3964104b-5860-40aa-9eeb-fb36d7569eb1", 00:10:10.806 "is_configured": false, 00:10:10.806 "data_offset": 0, 00:10:10.806 "data_size": 63488 00:10:10.806 }, 00:10:10.806 { 00:10:10.806 "name": "BaseBdev3", 00:10:10.806 "uuid": "66a9629d-2fc5-4e5c-99fb-fe7690a5d44d", 00:10:10.806 "is_configured": true, 00:10:10.806 "data_offset": 2048, 00:10:10.806 "data_size": 63488 00:10:10.806 } 00:10:10.806 ] 00:10:10.806 }' 00:10:10.806 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.806 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:11.066 
05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.066 [2024-12-12 05:48:18.561370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.066 BaseBdev1 00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:11.066 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.066 [ 00:10:11.066 { 00:10:11.066 "name": "BaseBdev1", 00:10:11.066 "aliases": [ 00:10:11.066 "a8738d36-b75a-4aa4-8c0d-a5d04d15026b" 00:10:11.066 ], 00:10:11.326 "product_name": "Malloc disk", 00:10:11.326 "block_size": 512, 00:10:11.326 "num_blocks": 65536, 00:10:11.326 "uuid": "a8738d36-b75a-4aa4-8c0d-a5d04d15026b", 00:10:11.326 "assigned_rate_limits": { 00:10:11.326 "rw_ios_per_sec": 0, 00:10:11.326 "rw_mbytes_per_sec": 0, 00:10:11.326 "r_mbytes_per_sec": 0, 00:10:11.326 "w_mbytes_per_sec": 0 00:10:11.326 }, 00:10:11.326 "claimed": true, 00:10:11.326 "claim_type": "exclusive_write", 00:10:11.326 "zoned": false, 00:10:11.326 "supported_io_types": { 00:10:11.326 "read": true, 00:10:11.326 "write": true, 00:10:11.326 "unmap": true, 00:10:11.326 "flush": true, 00:10:11.326 "reset": true, 00:10:11.326 "nvme_admin": false, 00:10:11.326 "nvme_io": false, 00:10:11.326 "nvme_io_md": false, 00:10:11.326 "write_zeroes": true, 00:10:11.326 "zcopy": true, 00:10:11.326 "get_zone_info": false, 00:10:11.326 "zone_management": false, 00:10:11.326 "zone_append": false, 00:10:11.326 "compare": false, 00:10:11.326 "compare_and_write": false, 00:10:11.326 "abort": true, 00:10:11.326 "seek_hole": false, 00:10:11.326 "seek_data": false, 00:10:11.326 "copy": true, 00:10:11.326 "nvme_iov_md": false 00:10:11.326 }, 00:10:11.326 "memory_domains": [ 00:10:11.326 { 00:10:11.326 "dma_device_id": "system", 00:10:11.326 "dma_device_type": 1 00:10:11.326 }, 00:10:11.326 { 00:10:11.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.326 "dma_device_type": 2 00:10:11.326 } 00:10:11.326 ], 00:10:11.326 "driver_specific": {} 00:10:11.326 } 00:10:11.326 ] 00:10:11.326 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.326 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:11.326 
05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:11.326 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.326 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.326 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.326 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.326 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.326 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.326 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.326 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.326 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.326 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.326 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.326 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.326 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.327 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.327 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.327 "name": "Existed_Raid", 00:10:11.327 "uuid": "e6992d03-1191-476a-914c-44c0577f93ed", 00:10:11.327 "strip_size_kb": 0, 
00:10:11.327 "state": "configuring", 00:10:11.327 "raid_level": "raid1", 00:10:11.327 "superblock": true, 00:10:11.327 "num_base_bdevs": 3, 00:10:11.327 "num_base_bdevs_discovered": 2, 00:10:11.327 "num_base_bdevs_operational": 3, 00:10:11.327 "base_bdevs_list": [ 00:10:11.327 { 00:10:11.327 "name": "BaseBdev1", 00:10:11.327 "uuid": "a8738d36-b75a-4aa4-8c0d-a5d04d15026b", 00:10:11.327 "is_configured": true, 00:10:11.327 "data_offset": 2048, 00:10:11.327 "data_size": 63488 00:10:11.327 }, 00:10:11.327 { 00:10:11.327 "name": null, 00:10:11.327 "uuid": "3964104b-5860-40aa-9eeb-fb36d7569eb1", 00:10:11.327 "is_configured": false, 00:10:11.327 "data_offset": 0, 00:10:11.327 "data_size": 63488 00:10:11.327 }, 00:10:11.327 { 00:10:11.327 "name": "BaseBdev3", 00:10:11.327 "uuid": "66a9629d-2fc5-4e5c-99fb-fe7690a5d44d", 00:10:11.327 "is_configured": true, 00:10:11.327 "data_offset": 2048, 00:10:11.327 "data_size": 63488 00:10:11.327 } 00:10:11.327 ] 00:10:11.327 }' 00:10:11.327 05:48:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.327 05:48:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.587 [2024-12-12 05:48:19.036661] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.587 "name": "Existed_Raid", 00:10:11.587 "uuid": "e6992d03-1191-476a-914c-44c0577f93ed", 00:10:11.587 "strip_size_kb": 0, 00:10:11.587 "state": "configuring", 00:10:11.587 "raid_level": "raid1", 00:10:11.587 "superblock": true, 00:10:11.587 "num_base_bdevs": 3, 00:10:11.587 "num_base_bdevs_discovered": 1, 00:10:11.587 "num_base_bdevs_operational": 3, 00:10:11.587 "base_bdevs_list": [ 00:10:11.587 { 00:10:11.587 "name": "BaseBdev1", 00:10:11.587 "uuid": "a8738d36-b75a-4aa4-8c0d-a5d04d15026b", 00:10:11.587 "is_configured": true, 00:10:11.587 "data_offset": 2048, 00:10:11.587 "data_size": 63488 00:10:11.587 }, 00:10:11.587 { 00:10:11.587 "name": null, 00:10:11.587 "uuid": "3964104b-5860-40aa-9eeb-fb36d7569eb1", 00:10:11.587 "is_configured": false, 00:10:11.587 "data_offset": 0, 00:10:11.587 "data_size": 63488 00:10:11.587 }, 00:10:11.587 { 00:10:11.587 "name": null, 00:10:11.587 "uuid": "66a9629d-2fc5-4e5c-99fb-fe7690a5d44d", 00:10:11.587 "is_configured": false, 00:10:11.587 "data_offset": 0, 00:10:11.587 "data_size": 63488 00:10:11.587 } 00:10:11.587 ] 00:10:11.587 }' 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.587 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.157 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.157 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:12.157 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:12.157 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.157 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.157 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:12.157 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:12.157 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.157 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.157 [2024-12-12 05:48:19.507919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:12.157 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.157 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:12.157 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.157 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.157 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.157 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:12.157 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.157 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.157 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.157 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:12.157 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.158 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.158 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.158 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.158 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.158 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.158 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.158 "name": "Existed_Raid", 00:10:12.158 "uuid": "e6992d03-1191-476a-914c-44c0577f93ed", 00:10:12.158 "strip_size_kb": 0, 00:10:12.158 "state": "configuring", 00:10:12.158 "raid_level": "raid1", 00:10:12.158 "superblock": true, 00:10:12.158 "num_base_bdevs": 3, 00:10:12.158 "num_base_bdevs_discovered": 2, 00:10:12.158 "num_base_bdevs_operational": 3, 00:10:12.158 "base_bdevs_list": [ 00:10:12.158 { 00:10:12.158 "name": "BaseBdev1", 00:10:12.158 "uuid": "a8738d36-b75a-4aa4-8c0d-a5d04d15026b", 00:10:12.158 "is_configured": true, 00:10:12.158 "data_offset": 2048, 00:10:12.158 "data_size": 63488 00:10:12.158 }, 00:10:12.158 { 00:10:12.158 "name": null, 00:10:12.158 "uuid": "3964104b-5860-40aa-9eeb-fb36d7569eb1", 00:10:12.158 "is_configured": false, 00:10:12.158 "data_offset": 0, 00:10:12.158 "data_size": 63488 00:10:12.158 }, 00:10:12.158 { 00:10:12.158 "name": "BaseBdev3", 00:10:12.158 "uuid": "66a9629d-2fc5-4e5c-99fb-fe7690a5d44d", 00:10:12.158 "is_configured": true, 00:10:12.158 "data_offset": 2048, 00:10:12.158 "data_size": 63488 00:10:12.158 } 00:10:12.158 ] 00:10:12.158 }' 00:10:12.158 05:48:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.158 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.727 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:12.727 05:48:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.727 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.727 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.727 05:48:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.727 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:12.727 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:12.727 05:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.727 05:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.727 [2024-12-12 05:48:20.023063] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:12.727 05:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.727 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:12.727 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.727 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.727 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:12.727 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:12.727 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.727 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.727 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.727 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.727 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.727 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.727 05:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.727 05:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.727 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.727 05:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.727 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.727 "name": "Existed_Raid", 00:10:12.727 "uuid": "e6992d03-1191-476a-914c-44c0577f93ed", 00:10:12.727 "strip_size_kb": 0, 00:10:12.727 "state": "configuring", 00:10:12.727 "raid_level": "raid1", 00:10:12.727 "superblock": true, 00:10:12.728 "num_base_bdevs": 3, 00:10:12.728 "num_base_bdevs_discovered": 1, 00:10:12.728 "num_base_bdevs_operational": 3, 00:10:12.728 "base_bdevs_list": [ 00:10:12.728 { 00:10:12.728 "name": null, 00:10:12.728 "uuid": "a8738d36-b75a-4aa4-8c0d-a5d04d15026b", 00:10:12.728 "is_configured": false, 00:10:12.728 "data_offset": 0, 00:10:12.728 "data_size": 63488 00:10:12.728 }, 00:10:12.728 { 00:10:12.728 "name": null, 00:10:12.728 "uuid": 
"3964104b-5860-40aa-9eeb-fb36d7569eb1", 00:10:12.728 "is_configured": false, 00:10:12.728 "data_offset": 0, 00:10:12.728 "data_size": 63488 00:10:12.728 }, 00:10:12.728 { 00:10:12.728 "name": "BaseBdev3", 00:10:12.728 "uuid": "66a9629d-2fc5-4e5c-99fb-fe7690a5d44d", 00:10:12.728 "is_configured": true, 00:10:12.728 "data_offset": 2048, 00:10:12.728 "data_size": 63488 00:10:12.728 } 00:10:12.728 ] 00:10:12.728 }' 00:10:12.728 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.728 05:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.297 [2024-12-12 05:48:20.582032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.297 "name": "Existed_Raid", 00:10:13.297 "uuid": "e6992d03-1191-476a-914c-44c0577f93ed", 00:10:13.297 "strip_size_kb": 0, 00:10:13.297 "state": "configuring", 00:10:13.297 
"raid_level": "raid1", 00:10:13.297 "superblock": true, 00:10:13.297 "num_base_bdevs": 3, 00:10:13.297 "num_base_bdevs_discovered": 2, 00:10:13.297 "num_base_bdevs_operational": 3, 00:10:13.297 "base_bdevs_list": [ 00:10:13.297 { 00:10:13.297 "name": null, 00:10:13.297 "uuid": "a8738d36-b75a-4aa4-8c0d-a5d04d15026b", 00:10:13.297 "is_configured": false, 00:10:13.297 "data_offset": 0, 00:10:13.297 "data_size": 63488 00:10:13.297 }, 00:10:13.297 { 00:10:13.297 "name": "BaseBdev2", 00:10:13.297 "uuid": "3964104b-5860-40aa-9eeb-fb36d7569eb1", 00:10:13.297 "is_configured": true, 00:10:13.297 "data_offset": 2048, 00:10:13.297 "data_size": 63488 00:10:13.297 }, 00:10:13.297 { 00:10:13.297 "name": "BaseBdev3", 00:10:13.297 "uuid": "66a9629d-2fc5-4e5c-99fb-fe7690a5d44d", 00:10:13.297 "is_configured": true, 00:10:13.297 "data_offset": 2048, 00:10:13.297 "data_size": 63488 00:10:13.297 } 00:10:13.297 ] 00:10:13.297 }' 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.297 05:48:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.557 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:13.558 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.558 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.558 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.558 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.558 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:13.558 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:13.817 05:48:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.817 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.817 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.817 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.817 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a8738d36-b75a-4aa4-8c0d-a5d04d15026b 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.818 [2024-12-12 05:48:21.161504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:13.818 [2024-12-12 05:48:21.161837] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:13.818 [2024-12-12 05:48:21.161890] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:13.818 [2024-12-12 05:48:21.162200] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:13.818 NewBaseBdev 00:10:13.818 [2024-12-12 05:48:21.162470] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:13.818 [2024-12-12 05:48:21.162563] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:13.818 [2024-12-12 05:48:21.162843] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.818 
05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.818 [ 00:10:13.818 { 00:10:13.818 "name": "NewBaseBdev", 00:10:13.818 "aliases": [ 00:10:13.818 "a8738d36-b75a-4aa4-8c0d-a5d04d15026b" 00:10:13.818 ], 00:10:13.818 "product_name": "Malloc disk", 00:10:13.818 "block_size": 512, 00:10:13.818 "num_blocks": 65536, 00:10:13.818 "uuid": "a8738d36-b75a-4aa4-8c0d-a5d04d15026b", 00:10:13.818 "assigned_rate_limits": { 00:10:13.818 "rw_ios_per_sec": 0, 00:10:13.818 "rw_mbytes_per_sec": 0, 00:10:13.818 "r_mbytes_per_sec": 0, 00:10:13.818 "w_mbytes_per_sec": 0 00:10:13.818 }, 00:10:13.818 "claimed": true, 00:10:13.818 "claim_type": "exclusive_write", 00:10:13.818 
"zoned": false, 00:10:13.818 "supported_io_types": { 00:10:13.818 "read": true, 00:10:13.818 "write": true, 00:10:13.818 "unmap": true, 00:10:13.818 "flush": true, 00:10:13.818 "reset": true, 00:10:13.818 "nvme_admin": false, 00:10:13.818 "nvme_io": false, 00:10:13.818 "nvme_io_md": false, 00:10:13.818 "write_zeroes": true, 00:10:13.818 "zcopy": true, 00:10:13.818 "get_zone_info": false, 00:10:13.818 "zone_management": false, 00:10:13.818 "zone_append": false, 00:10:13.818 "compare": false, 00:10:13.818 "compare_and_write": false, 00:10:13.818 "abort": true, 00:10:13.818 "seek_hole": false, 00:10:13.818 "seek_data": false, 00:10:13.818 "copy": true, 00:10:13.818 "nvme_iov_md": false 00:10:13.818 }, 00:10:13.818 "memory_domains": [ 00:10:13.818 { 00:10:13.818 "dma_device_id": "system", 00:10:13.818 "dma_device_type": 1 00:10:13.818 }, 00:10:13.818 { 00:10:13.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.818 "dma_device_type": 2 00:10:13.818 } 00:10:13.818 ], 00:10:13.818 "driver_specific": {} 00:10:13.818 } 00:10:13.818 ] 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.818 "name": "Existed_Raid", 00:10:13.818 "uuid": "e6992d03-1191-476a-914c-44c0577f93ed", 00:10:13.818 "strip_size_kb": 0, 00:10:13.818 "state": "online", 00:10:13.818 "raid_level": "raid1", 00:10:13.818 "superblock": true, 00:10:13.818 "num_base_bdevs": 3, 00:10:13.818 "num_base_bdevs_discovered": 3, 00:10:13.818 "num_base_bdevs_operational": 3, 00:10:13.818 "base_bdevs_list": [ 00:10:13.818 { 00:10:13.818 "name": "NewBaseBdev", 00:10:13.818 "uuid": "a8738d36-b75a-4aa4-8c0d-a5d04d15026b", 00:10:13.818 "is_configured": true, 00:10:13.818 "data_offset": 2048, 00:10:13.818 "data_size": 63488 00:10:13.818 }, 00:10:13.818 { 00:10:13.818 "name": "BaseBdev2", 00:10:13.818 "uuid": "3964104b-5860-40aa-9eeb-fb36d7569eb1", 00:10:13.818 "is_configured": true, 00:10:13.818 "data_offset": 2048, 00:10:13.818 "data_size": 63488 00:10:13.818 }, 00:10:13.818 
{ 00:10:13.818 "name": "BaseBdev3", 00:10:13.818 "uuid": "66a9629d-2fc5-4e5c-99fb-fe7690a5d44d", 00:10:13.818 "is_configured": true, 00:10:13.818 "data_offset": 2048, 00:10:13.818 "data_size": 63488 00:10:13.818 } 00:10:13.818 ] 00:10:13.818 }' 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.818 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.393 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:14.393 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:14.393 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:14.393 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:14.393 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:14.393 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:14.393 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:14.393 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:14.393 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.393 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.393 [2024-12-12 05:48:21.621037] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:14.393 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.393 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:14.393 "name": "Existed_Raid", 00:10:14.393 
"aliases": [ 00:10:14.393 "e6992d03-1191-476a-914c-44c0577f93ed" 00:10:14.393 ], 00:10:14.393 "product_name": "Raid Volume", 00:10:14.393 "block_size": 512, 00:10:14.393 "num_blocks": 63488, 00:10:14.393 "uuid": "e6992d03-1191-476a-914c-44c0577f93ed", 00:10:14.393 "assigned_rate_limits": { 00:10:14.393 "rw_ios_per_sec": 0, 00:10:14.393 "rw_mbytes_per_sec": 0, 00:10:14.393 "r_mbytes_per_sec": 0, 00:10:14.393 "w_mbytes_per_sec": 0 00:10:14.393 }, 00:10:14.393 "claimed": false, 00:10:14.393 "zoned": false, 00:10:14.393 "supported_io_types": { 00:10:14.393 "read": true, 00:10:14.393 "write": true, 00:10:14.393 "unmap": false, 00:10:14.393 "flush": false, 00:10:14.393 "reset": true, 00:10:14.393 "nvme_admin": false, 00:10:14.393 "nvme_io": false, 00:10:14.393 "nvme_io_md": false, 00:10:14.393 "write_zeroes": true, 00:10:14.393 "zcopy": false, 00:10:14.393 "get_zone_info": false, 00:10:14.393 "zone_management": false, 00:10:14.393 "zone_append": false, 00:10:14.393 "compare": false, 00:10:14.393 "compare_and_write": false, 00:10:14.393 "abort": false, 00:10:14.393 "seek_hole": false, 00:10:14.393 "seek_data": false, 00:10:14.393 "copy": false, 00:10:14.393 "nvme_iov_md": false 00:10:14.393 }, 00:10:14.393 "memory_domains": [ 00:10:14.393 { 00:10:14.393 "dma_device_id": "system", 00:10:14.393 "dma_device_type": 1 00:10:14.393 }, 00:10:14.393 { 00:10:14.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.393 "dma_device_type": 2 00:10:14.393 }, 00:10:14.393 { 00:10:14.393 "dma_device_id": "system", 00:10:14.393 "dma_device_type": 1 00:10:14.393 }, 00:10:14.393 { 00:10:14.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.393 "dma_device_type": 2 00:10:14.393 }, 00:10:14.393 { 00:10:14.393 "dma_device_id": "system", 00:10:14.393 "dma_device_type": 1 00:10:14.393 }, 00:10:14.393 { 00:10:14.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.393 "dma_device_type": 2 00:10:14.393 } 00:10:14.393 ], 00:10:14.393 "driver_specific": { 00:10:14.393 "raid": { 00:10:14.393 
"uuid": "e6992d03-1191-476a-914c-44c0577f93ed", 00:10:14.393 "strip_size_kb": 0, 00:10:14.393 "state": "online", 00:10:14.394 "raid_level": "raid1", 00:10:14.394 "superblock": true, 00:10:14.394 "num_base_bdevs": 3, 00:10:14.394 "num_base_bdevs_discovered": 3, 00:10:14.394 "num_base_bdevs_operational": 3, 00:10:14.394 "base_bdevs_list": [ 00:10:14.394 { 00:10:14.394 "name": "NewBaseBdev", 00:10:14.394 "uuid": "a8738d36-b75a-4aa4-8c0d-a5d04d15026b", 00:10:14.394 "is_configured": true, 00:10:14.394 "data_offset": 2048, 00:10:14.394 "data_size": 63488 00:10:14.394 }, 00:10:14.394 { 00:10:14.394 "name": "BaseBdev2", 00:10:14.394 "uuid": "3964104b-5860-40aa-9eeb-fb36d7569eb1", 00:10:14.394 "is_configured": true, 00:10:14.394 "data_offset": 2048, 00:10:14.394 "data_size": 63488 00:10:14.394 }, 00:10:14.394 { 00:10:14.394 "name": "BaseBdev3", 00:10:14.394 "uuid": "66a9629d-2fc5-4e5c-99fb-fe7690a5d44d", 00:10:14.394 "is_configured": true, 00:10:14.394 "data_offset": 2048, 00:10:14.394 "data_size": 63488 00:10:14.394 } 00:10:14.394 ] 00:10:14.394 } 00:10:14.394 } 00:10:14.394 }' 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:14.394 BaseBdev2 00:10:14.394 BaseBdev3' 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.394 
05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.394 [2024-12-12 05:48:21.852387] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:14.394 [2024-12-12 05:48:21.852464] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:14.394 [2024-12-12 05:48:21.852570] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:14.394 [2024-12-12 05:48:21.852955] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:14.394 [2024-12-12 05:48:21.853010] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68958 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 68958 ']' 00:10:14.394 05:48:21 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 68958 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68958 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:14.394 killing process with pid 68958 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68958' 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 68958 00:10:14.394 05:48:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 68958 00:10:14.394 [2024-12-12 05:48:21.894471] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:14.962 [2024-12-12 05:48:22.188538] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.899 ************************************ 00:10:15.899 END TEST raid_state_function_test_sb 00:10:15.899 ************************************ 00:10:15.899 05:48:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:15.899 00:10:15.899 real 0m10.100s 00:10:15.900 user 0m16.075s 00:10:15.900 sys 0m1.664s 00:10:15.900 05:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.900 05:48:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.900 05:48:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:10:15.900 05:48:23 
bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:15.900 05:48:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.900 05:48:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.900 ************************************ 00:10:15.900 START TEST raid_superblock_test 00:10:15.900 ************************************ 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:10:15.900 05:48:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=69574 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 69574 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 69574 ']' 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.900 05:48:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.159 [2024-12-12 05:48:23.431991] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:10:16.159 [2024-12-12 05:48:23.432180] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69574 ] 00:10:16.159 [2024-12-12 05:48:23.606084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.418 [2024-12-12 05:48:23.714929] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.418 [2024-12-12 05:48:23.912461] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.418 [2024-12-12 05:48:23.912612] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:16.986 
05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.986 malloc1 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.986 [2024-12-12 05:48:24.315038] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:16.986 [2024-12-12 05:48:24.315154] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.986 [2024-12-12 05:48:24.315196] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:16.986 [2024-12-12 05:48:24.315225] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.986 [2024-12-12 05:48:24.317390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.986 [2024-12-12 05:48:24.317461] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:16.986 pt1 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.986 malloc2 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.986 [2024-12-12 05:48:24.374334] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:16.986 [2024-12-12 05:48:24.374407] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.986 [2024-12-12 05:48:24.374430] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:16.986 [2024-12-12 05:48:24.374439] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.986 [2024-12-12 05:48:24.376567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.986 [2024-12-12 05:48:24.376648] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:16.986 
pt2 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.986 malloc3 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.986 [2024-12-12 05:48:24.437755] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:16.986 [2024-12-12 05:48:24.437854] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.986 [2024-12-12 05:48:24.437892] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:16.986 [2024-12-12 05:48:24.437919] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.986 [2024-12-12 05:48:24.440059] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.986 [2024-12-12 05:48:24.440128] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:16.986 pt3 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.986 [2024-12-12 05:48:24.449774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:16.986 [2024-12-12 05:48:24.451616] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:16.986 [2024-12-12 05:48:24.451735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:16.986 [2024-12-12 05:48:24.451935] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:16.986 [2024-12-12 05:48:24.451992] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:16.986 [2024-12-12 05:48:24.452258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:16.986 
[2024-12-12 05:48:24.452480] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:16.986 [2024-12-12 05:48:24.452542] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:16.986 [2024-12-12 05:48:24.452743] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.986 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.986 "name": "raid_bdev1", 00:10:16.986 "uuid": "7a642d4e-d02b-4180-af96-1cbe91e63769", 00:10:16.986 "strip_size_kb": 0, 00:10:16.986 "state": "online", 00:10:16.986 "raid_level": "raid1", 00:10:16.986 "superblock": true, 00:10:16.986 "num_base_bdevs": 3, 00:10:16.986 "num_base_bdevs_discovered": 3, 00:10:16.986 "num_base_bdevs_operational": 3, 00:10:16.986 "base_bdevs_list": [ 00:10:16.986 { 00:10:16.986 "name": "pt1", 00:10:16.986 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:16.986 "is_configured": true, 00:10:16.986 "data_offset": 2048, 00:10:16.986 "data_size": 63488 00:10:16.986 }, 00:10:16.986 { 00:10:16.986 "name": "pt2", 00:10:16.987 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:16.987 "is_configured": true, 00:10:16.987 "data_offset": 2048, 00:10:16.987 "data_size": 63488 00:10:16.987 }, 00:10:16.987 { 00:10:16.987 "name": "pt3", 00:10:16.987 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:16.987 "is_configured": true, 00:10:16.987 "data_offset": 2048, 00:10:16.987 "data_size": 63488 00:10:16.987 } 00:10:16.987 ] 00:10:16.987 }' 00:10:16.987 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.987 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.553 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:17.553 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:17.553 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:17.553 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:17.553 05:48:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:17.553 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:17.553 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:17.553 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:17.553 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.553 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.553 [2024-12-12 05:48:24.889271] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.553 05:48:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.553 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:17.553 "name": "raid_bdev1", 00:10:17.553 "aliases": [ 00:10:17.553 "7a642d4e-d02b-4180-af96-1cbe91e63769" 00:10:17.553 ], 00:10:17.553 "product_name": "Raid Volume", 00:10:17.553 "block_size": 512, 00:10:17.553 "num_blocks": 63488, 00:10:17.553 "uuid": "7a642d4e-d02b-4180-af96-1cbe91e63769", 00:10:17.553 "assigned_rate_limits": { 00:10:17.553 "rw_ios_per_sec": 0, 00:10:17.553 "rw_mbytes_per_sec": 0, 00:10:17.553 "r_mbytes_per_sec": 0, 00:10:17.553 "w_mbytes_per_sec": 0 00:10:17.553 }, 00:10:17.553 "claimed": false, 00:10:17.553 "zoned": false, 00:10:17.553 "supported_io_types": { 00:10:17.553 "read": true, 00:10:17.553 "write": true, 00:10:17.553 "unmap": false, 00:10:17.553 "flush": false, 00:10:17.553 "reset": true, 00:10:17.553 "nvme_admin": false, 00:10:17.553 "nvme_io": false, 00:10:17.553 "nvme_io_md": false, 00:10:17.553 "write_zeroes": true, 00:10:17.553 "zcopy": false, 00:10:17.553 "get_zone_info": false, 00:10:17.553 "zone_management": false, 00:10:17.553 "zone_append": false, 00:10:17.553 "compare": false, 00:10:17.553 
"compare_and_write": false, 00:10:17.553 "abort": false, 00:10:17.553 "seek_hole": false, 00:10:17.553 "seek_data": false, 00:10:17.553 "copy": false, 00:10:17.553 "nvme_iov_md": false 00:10:17.553 }, 00:10:17.553 "memory_domains": [ 00:10:17.553 { 00:10:17.553 "dma_device_id": "system", 00:10:17.553 "dma_device_type": 1 00:10:17.553 }, 00:10:17.553 { 00:10:17.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.553 "dma_device_type": 2 00:10:17.553 }, 00:10:17.553 { 00:10:17.553 "dma_device_id": "system", 00:10:17.553 "dma_device_type": 1 00:10:17.553 }, 00:10:17.553 { 00:10:17.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.553 "dma_device_type": 2 00:10:17.553 }, 00:10:17.553 { 00:10:17.553 "dma_device_id": "system", 00:10:17.553 "dma_device_type": 1 00:10:17.553 }, 00:10:17.553 { 00:10:17.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.553 "dma_device_type": 2 00:10:17.553 } 00:10:17.553 ], 00:10:17.553 "driver_specific": { 00:10:17.553 "raid": { 00:10:17.553 "uuid": "7a642d4e-d02b-4180-af96-1cbe91e63769", 00:10:17.553 "strip_size_kb": 0, 00:10:17.553 "state": "online", 00:10:17.553 "raid_level": "raid1", 00:10:17.553 "superblock": true, 00:10:17.553 "num_base_bdevs": 3, 00:10:17.553 "num_base_bdevs_discovered": 3, 00:10:17.553 "num_base_bdevs_operational": 3, 00:10:17.553 "base_bdevs_list": [ 00:10:17.553 { 00:10:17.553 "name": "pt1", 00:10:17.553 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:17.553 "is_configured": true, 00:10:17.553 "data_offset": 2048, 00:10:17.553 "data_size": 63488 00:10:17.553 }, 00:10:17.553 { 00:10:17.553 "name": "pt2", 00:10:17.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:17.553 "is_configured": true, 00:10:17.553 "data_offset": 2048, 00:10:17.553 "data_size": 63488 00:10:17.553 }, 00:10:17.553 { 00:10:17.553 "name": "pt3", 00:10:17.553 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:17.553 "is_configured": true, 00:10:17.553 "data_offset": 2048, 00:10:17.553 "data_size": 63488 00:10:17.553 } 
00:10:17.553 ] 00:10:17.553 } 00:10:17.553 } 00:10:17.553 }' 00:10:17.553 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.553 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:17.553 pt2 00:10:17.553 pt3' 00:10:17.553 05:48:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.553 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:17.553 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.553 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:17.553 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.553 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.553 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.553 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.553 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.553 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.553 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.812 [2024-12-12 05:48:25.180797] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7a642d4e-d02b-4180-af96-1cbe91e63769 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7a642d4e-d02b-4180-af96-1cbe91e63769 ']' 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.812 [2024-12-12 05:48:25.228444] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.812 [2024-12-12 05:48:25.228531] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.812 [2024-12-12 05:48:25.228631] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.812 [2024-12-12 05:48:25.228741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.812 [2024-12-12 05:48:25.228786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:17.812 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.070 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:18.070 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:18.070 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:18.070 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:18.070 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:18.070 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:18.070 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:18.070 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:18.070 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:18.070 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.070 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.070 [2024-12-12 05:48:25.364262] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:18.070 [2024-12-12 05:48:25.366105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:18.070 [2024-12-12 05:48:25.366162] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:18.070 [2024-12-12 05:48:25.366216] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:18.070 [2024-12-12 05:48:25.366264] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:18.070 [2024-12-12 05:48:25.366283] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:18.070 [2024-12-12 05:48:25.366298] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:18.070 [2024-12-12 05:48:25.366308] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:18.070 request: 00:10:18.070 { 00:10:18.070 "name": "raid_bdev1", 00:10:18.070 "raid_level": "raid1", 00:10:18.070 "base_bdevs": [ 00:10:18.070 "malloc1", 00:10:18.070 "malloc2", 00:10:18.070 "malloc3" 00:10:18.070 ], 00:10:18.070 "superblock": false, 00:10:18.070 "method": "bdev_raid_create", 00:10:18.070 "req_id": 1 00:10:18.070 } 00:10:18.070 Got JSON-RPC error response 00:10:18.070 response: 00:10:18.070 { 00:10:18.070 "code": -17, 00:10:18.070 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:18.071 } 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.071 [2024-12-12 05:48:25.428101] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:18.071 [2024-12-12 05:48:25.428148] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.071 [2024-12-12 05:48:25.428166] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:18.071 [2024-12-12 05:48:25.428175] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.071 [2024-12-12 05:48:25.430391] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.071 [2024-12-12 05:48:25.430475] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:18.071 [2024-12-12 05:48:25.430562] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:18.071 [2024-12-12 05:48:25.430632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:18.071 pt1 00:10:18.071 
05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.071 "name": "raid_bdev1", 00:10:18.071 "uuid": "7a642d4e-d02b-4180-af96-1cbe91e63769", 00:10:18.071 "strip_size_kb": 0, 00:10:18.071 
"state": "configuring", 00:10:18.071 "raid_level": "raid1", 00:10:18.071 "superblock": true, 00:10:18.071 "num_base_bdevs": 3, 00:10:18.071 "num_base_bdevs_discovered": 1, 00:10:18.071 "num_base_bdevs_operational": 3, 00:10:18.071 "base_bdevs_list": [ 00:10:18.071 { 00:10:18.071 "name": "pt1", 00:10:18.071 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.071 "is_configured": true, 00:10:18.071 "data_offset": 2048, 00:10:18.071 "data_size": 63488 00:10:18.071 }, 00:10:18.071 { 00:10:18.071 "name": null, 00:10:18.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.071 "is_configured": false, 00:10:18.071 "data_offset": 2048, 00:10:18.071 "data_size": 63488 00:10:18.071 }, 00:10:18.071 { 00:10:18.071 "name": null, 00:10:18.071 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.071 "is_configured": false, 00:10:18.071 "data_offset": 2048, 00:10:18.071 "data_size": 63488 00:10:18.071 } 00:10:18.071 ] 00:10:18.071 }' 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.071 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.330 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:18.330 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:18.330 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.330 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.330 [2024-12-12 05:48:25.847456] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:18.330 [2024-12-12 05:48:25.847525] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.330 [2024-12-12 05:48:25.847549] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:18.330 
[2024-12-12 05:48:25.847558] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.330 [2024-12-12 05:48:25.848032] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.330 [2024-12-12 05:48:25.848057] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:18.330 [2024-12-12 05:48:25.848139] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:18.330 [2024-12-12 05:48:25.848160] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:18.589 pt2 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.589 [2024-12-12 05:48:25.859437] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.589 "name": "raid_bdev1", 00:10:18.589 "uuid": "7a642d4e-d02b-4180-af96-1cbe91e63769", 00:10:18.589 "strip_size_kb": 0, 00:10:18.589 "state": "configuring", 00:10:18.589 "raid_level": "raid1", 00:10:18.589 "superblock": true, 00:10:18.589 "num_base_bdevs": 3, 00:10:18.589 "num_base_bdevs_discovered": 1, 00:10:18.589 "num_base_bdevs_operational": 3, 00:10:18.589 "base_bdevs_list": [ 00:10:18.589 { 00:10:18.589 "name": "pt1", 00:10:18.589 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.589 "is_configured": true, 00:10:18.589 "data_offset": 2048, 00:10:18.589 "data_size": 63488 00:10:18.589 }, 00:10:18.589 { 00:10:18.589 "name": null, 00:10:18.589 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.589 "is_configured": false, 00:10:18.589 "data_offset": 0, 00:10:18.589 "data_size": 63488 00:10:18.589 }, 00:10:18.589 { 00:10:18.589 "name": null, 00:10:18.589 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.589 "is_configured": false, 00:10:18.589 
"data_offset": 2048, 00:10:18.589 "data_size": 63488 00:10:18.589 } 00:10:18.589 ] 00:10:18.589 }' 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.589 05:48:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.849 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:18.849 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:18.849 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:18.849 05:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.849 05:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.849 [2024-12-12 05:48:26.334644] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:18.849 [2024-12-12 05:48:26.334776] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.849 [2024-12-12 05:48:26.334818] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:18.849 [2024-12-12 05:48:26.334851] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.849 [2024-12-12 05:48:26.335403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.849 [2024-12-12 05:48:26.335488] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:18.849 [2024-12-12 05:48:26.335646] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:18.849 [2024-12-12 05:48:26.335729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:18.849 pt2 00:10:18.849 05:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.849 05:48:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:18.849 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:18.849 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:18.849 05:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.849 05:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.849 [2024-12-12 05:48:26.346638] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:18.849 [2024-12-12 05:48:26.346728] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.849 [2024-12-12 05:48:26.346761] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:18.849 [2024-12-12 05:48:26.346791] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.849 [2024-12-12 05:48:26.347193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.849 [2024-12-12 05:48:26.347254] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:18.849 [2024-12-12 05:48:26.347364] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:18.849 [2024-12-12 05:48:26.347418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:18.849 [2024-12-12 05:48:26.347621] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:18.849 [2024-12-12 05:48:26.347668] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:18.849 [2024-12-12 05:48:26.347948] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:18.849 [2024-12-12 05:48:26.348166] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:10:18.849 [2024-12-12 05:48:26.348205] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:18.849 [2024-12-12 05:48:26.348406] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.849 pt3 00:10:18.849 05:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.849 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:18.849 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:18.849 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:18.849 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.849 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.849 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:18.850 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:18.850 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.850 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.850 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.850 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.850 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.850 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.850 05:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.850 05:48:26 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:18.850 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.850 05:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.112 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.112 "name": "raid_bdev1", 00:10:19.112 "uuid": "7a642d4e-d02b-4180-af96-1cbe91e63769", 00:10:19.112 "strip_size_kb": 0, 00:10:19.112 "state": "online", 00:10:19.112 "raid_level": "raid1", 00:10:19.112 "superblock": true, 00:10:19.112 "num_base_bdevs": 3, 00:10:19.112 "num_base_bdevs_discovered": 3, 00:10:19.112 "num_base_bdevs_operational": 3, 00:10:19.112 "base_bdevs_list": [ 00:10:19.112 { 00:10:19.112 "name": "pt1", 00:10:19.112 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.112 "is_configured": true, 00:10:19.112 "data_offset": 2048, 00:10:19.112 "data_size": 63488 00:10:19.112 }, 00:10:19.112 { 00:10:19.112 "name": "pt2", 00:10:19.112 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.112 "is_configured": true, 00:10:19.112 "data_offset": 2048, 00:10:19.112 "data_size": 63488 00:10:19.112 }, 00:10:19.112 { 00:10:19.112 "name": "pt3", 00:10:19.112 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.112 "is_configured": true, 00:10:19.112 "data_offset": 2048, 00:10:19.112 "data_size": 63488 00:10:19.112 } 00:10:19.112 ] 00:10:19.112 }' 00:10:19.112 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.112 05:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.373 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:19.373 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:19.373 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:10:19.373 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:19.373 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:19.373 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:19.373 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:19.373 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:19.373 05:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.373 05:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.373 [2024-12-12 05:48:26.750248] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.373 05:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.373 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:19.373 "name": "raid_bdev1", 00:10:19.373 "aliases": [ 00:10:19.373 "7a642d4e-d02b-4180-af96-1cbe91e63769" 00:10:19.373 ], 00:10:19.373 "product_name": "Raid Volume", 00:10:19.373 "block_size": 512, 00:10:19.373 "num_blocks": 63488, 00:10:19.373 "uuid": "7a642d4e-d02b-4180-af96-1cbe91e63769", 00:10:19.373 "assigned_rate_limits": { 00:10:19.373 "rw_ios_per_sec": 0, 00:10:19.373 "rw_mbytes_per_sec": 0, 00:10:19.373 "r_mbytes_per_sec": 0, 00:10:19.373 "w_mbytes_per_sec": 0 00:10:19.373 }, 00:10:19.373 "claimed": false, 00:10:19.373 "zoned": false, 00:10:19.373 "supported_io_types": { 00:10:19.373 "read": true, 00:10:19.373 "write": true, 00:10:19.373 "unmap": false, 00:10:19.373 "flush": false, 00:10:19.373 "reset": true, 00:10:19.373 "nvme_admin": false, 00:10:19.373 "nvme_io": false, 00:10:19.373 "nvme_io_md": false, 00:10:19.373 "write_zeroes": true, 00:10:19.373 "zcopy": false, 00:10:19.373 "get_zone_info": false, 
00:10:19.373 "zone_management": false, 00:10:19.373 "zone_append": false, 00:10:19.373 "compare": false, 00:10:19.373 "compare_and_write": false, 00:10:19.373 "abort": false, 00:10:19.373 "seek_hole": false, 00:10:19.373 "seek_data": false, 00:10:19.373 "copy": false, 00:10:19.373 "nvme_iov_md": false 00:10:19.373 }, 00:10:19.373 "memory_domains": [ 00:10:19.373 { 00:10:19.373 "dma_device_id": "system", 00:10:19.373 "dma_device_type": 1 00:10:19.373 }, 00:10:19.373 { 00:10:19.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.373 "dma_device_type": 2 00:10:19.373 }, 00:10:19.373 { 00:10:19.373 "dma_device_id": "system", 00:10:19.373 "dma_device_type": 1 00:10:19.373 }, 00:10:19.373 { 00:10:19.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.373 "dma_device_type": 2 00:10:19.373 }, 00:10:19.373 { 00:10:19.373 "dma_device_id": "system", 00:10:19.373 "dma_device_type": 1 00:10:19.373 }, 00:10:19.373 { 00:10:19.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.373 "dma_device_type": 2 00:10:19.373 } 00:10:19.373 ], 00:10:19.373 "driver_specific": { 00:10:19.373 "raid": { 00:10:19.373 "uuid": "7a642d4e-d02b-4180-af96-1cbe91e63769", 00:10:19.373 "strip_size_kb": 0, 00:10:19.373 "state": "online", 00:10:19.373 "raid_level": "raid1", 00:10:19.373 "superblock": true, 00:10:19.373 "num_base_bdevs": 3, 00:10:19.373 "num_base_bdevs_discovered": 3, 00:10:19.373 "num_base_bdevs_operational": 3, 00:10:19.373 "base_bdevs_list": [ 00:10:19.373 { 00:10:19.373 "name": "pt1", 00:10:19.373 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.373 "is_configured": true, 00:10:19.373 "data_offset": 2048, 00:10:19.373 "data_size": 63488 00:10:19.373 }, 00:10:19.373 { 00:10:19.373 "name": "pt2", 00:10:19.373 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.373 "is_configured": true, 00:10:19.373 "data_offset": 2048, 00:10:19.373 "data_size": 63488 00:10:19.373 }, 00:10:19.373 { 00:10:19.373 "name": "pt3", 00:10:19.373 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:19.373 "is_configured": true, 00:10:19.373 "data_offset": 2048, 00:10:19.373 "data_size": 63488 00:10:19.373 } 00:10:19.373 ] 00:10:19.373 } 00:10:19.373 } 00:10:19.373 }' 00:10:19.373 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:19.373 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:19.373 pt2 00:10:19.373 pt3' 00:10:19.373 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.373 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:19.373 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.373 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:19.373 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.373 05:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.373 05:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.633 05:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.633 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.633 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.633 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.633 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:19.633 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.633 05:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.633 05:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.633 05:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.633 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.633 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.633 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.633 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:19.633 05:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.633 05:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.633 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.633 05:48:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.633 05:48:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.633 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.633 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:19.633 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.633 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.633 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:19.633 [2024-12-12 05:48:27.009773] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.633 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.633 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7a642d4e-d02b-4180-af96-1cbe91e63769 '!=' 7a642d4e-d02b-4180-af96-1cbe91e63769 ']' 00:10:19.633 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.634 [2024-12-12 05:48:27.057463] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.634 05:48:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.634 "name": "raid_bdev1", 00:10:19.634 "uuid": "7a642d4e-d02b-4180-af96-1cbe91e63769", 00:10:19.634 "strip_size_kb": 0, 00:10:19.634 "state": "online", 00:10:19.634 "raid_level": "raid1", 00:10:19.634 "superblock": true, 00:10:19.634 "num_base_bdevs": 3, 00:10:19.634 "num_base_bdevs_discovered": 2, 00:10:19.634 "num_base_bdevs_operational": 2, 00:10:19.634 "base_bdevs_list": [ 00:10:19.634 { 00:10:19.634 "name": null, 00:10:19.634 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:19.634 "is_configured": false, 00:10:19.634 "data_offset": 0, 00:10:19.634 "data_size": 63488 00:10:19.634 }, 00:10:19.634 { 00:10:19.634 "name": "pt2", 00:10:19.634 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.634 "is_configured": true, 00:10:19.634 "data_offset": 2048, 00:10:19.634 "data_size": 63488 00:10:19.634 }, 00:10:19.634 { 00:10:19.634 "name": "pt3", 00:10:19.634 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.634 "is_configured": true, 00:10:19.634 "data_offset": 2048, 00:10:19.634 "data_size": 63488 00:10:19.634 } 
00:10:19.634 ] 00:10:19.634 }' 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.634 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.203 [2024-12-12 05:48:27.488699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:20.203 [2024-12-12 05:48:27.488791] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:20.203 [2024-12-12 05:48:27.488894] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:20.203 [2024-12-12 05:48:27.489020] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:20.203 [2024-12-12 05:48:27.489081] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.203 05:48:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.203 [2024-12-12 05:48:27.572520] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:20.203 [2024-12-12 05:48:27.572574] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.203 [2024-12-12 05:48:27.572591] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:20.203 [2024-12-12 05:48:27.572601] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.203 [2024-12-12 05:48:27.574762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.203 [2024-12-12 05:48:27.574804] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:20.203 [2024-12-12 05:48:27.574880] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:20.203 [2024-12-12 05:48:27.574925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:20.203 pt2 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.203 05:48:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.203 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.203 "name": "raid_bdev1", 00:10:20.203 "uuid": "7a642d4e-d02b-4180-af96-1cbe91e63769", 00:10:20.203 "strip_size_kb": 0, 00:10:20.204 "state": "configuring", 00:10:20.204 "raid_level": "raid1", 00:10:20.204 "superblock": true, 00:10:20.204 "num_base_bdevs": 3, 00:10:20.204 "num_base_bdevs_discovered": 1, 00:10:20.204 "num_base_bdevs_operational": 2, 00:10:20.204 "base_bdevs_list": [ 00:10:20.204 { 00:10:20.204 "name": null, 00:10:20.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.204 "is_configured": false, 00:10:20.204 "data_offset": 2048, 00:10:20.204 "data_size": 63488 00:10:20.204 }, 00:10:20.204 { 00:10:20.204 "name": "pt2", 00:10:20.204 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.204 "is_configured": true, 00:10:20.204 "data_offset": 2048, 00:10:20.204 "data_size": 63488 00:10:20.204 }, 00:10:20.204 { 00:10:20.204 "name": null, 00:10:20.204 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.204 "is_configured": false, 00:10:20.204 "data_offset": 2048, 00:10:20.204 "data_size": 63488 00:10:20.204 } 
00:10:20.204 ] 00:10:20.204 }' 00:10:20.204 05:48:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.204 05:48:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.777 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:20.777 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:20.777 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:20.777 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:20.777 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.777 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.777 [2024-12-12 05:48:28.023770] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:20.777 [2024-12-12 05:48:28.023893] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.777 [2024-12-12 05:48:28.023932] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:20.777 [2024-12-12 05:48:28.023962] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.777 [2024-12-12 05:48:28.024483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.777 [2024-12-12 05:48:28.024574] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:20.777 [2024-12-12 05:48:28.024729] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:20.777 [2024-12-12 05:48:28.024807] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:20.777 [2024-12-12 05:48:28.024972] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:10:20.778 [2024-12-12 05:48:28.025012] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:20.778 [2024-12-12 05:48:28.025308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:20.778 [2024-12-12 05:48:28.025518] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:20.778 [2024-12-12 05:48:28.025562] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:20.778 [2024-12-12 05:48:28.025767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.778 pt3 00:10:20.778 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.778 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:20.778 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.778 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.778 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:20.778 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:20.778 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:20.778 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.778 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.778 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.778 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.778 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:10:20.778 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.778 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.778 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.778 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.778 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.778 "name": "raid_bdev1", 00:10:20.778 "uuid": "7a642d4e-d02b-4180-af96-1cbe91e63769", 00:10:20.778 "strip_size_kb": 0, 00:10:20.778 "state": "online", 00:10:20.778 "raid_level": "raid1", 00:10:20.778 "superblock": true, 00:10:20.778 "num_base_bdevs": 3, 00:10:20.778 "num_base_bdevs_discovered": 2, 00:10:20.778 "num_base_bdevs_operational": 2, 00:10:20.778 "base_bdevs_list": [ 00:10:20.778 { 00:10:20.778 "name": null, 00:10:20.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:20.778 "is_configured": false, 00:10:20.778 "data_offset": 2048, 00:10:20.778 "data_size": 63488 00:10:20.778 }, 00:10:20.778 { 00:10:20.778 "name": "pt2", 00:10:20.778 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.778 "is_configured": true, 00:10:20.778 "data_offset": 2048, 00:10:20.778 "data_size": 63488 00:10:20.778 }, 00:10:20.778 { 00:10:20.778 "name": "pt3", 00:10:20.778 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.778 "is_configured": true, 00:10:20.778 "data_offset": 2048, 00:10:20.778 "data_size": 63488 00:10:20.778 } 00:10:20.778 ] 00:10:20.778 }' 00:10:20.778 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.778 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.070 [2024-12-12 05:48:28.447008] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.070 [2024-12-12 05:48:28.447040] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:21.070 [2024-12-12 05:48:28.447123] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.070 [2024-12-12 05:48:28.447189] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.070 [2024-12-12 05:48:28.447198] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.070 [2024-12-12 05:48:28.518885] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:21.070 [2024-12-12 05:48:28.518939] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.070 [2024-12-12 05:48:28.518958] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:21.070 [2024-12-12 05:48:28.518967] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.070 [2024-12-12 05:48:28.521066] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.070 [2024-12-12 05:48:28.521103] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:21.070 [2024-12-12 05:48:28.521178] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:21.070 [2024-12-12 05:48:28.521222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:21.070 [2024-12-12 05:48:28.521351] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:21.070 [2024-12-12 05:48:28.521361] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:21.070 [2024-12-12 05:48:28.521377] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:10:21.070 [2024-12-12 05:48:28.521433] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:21.070 pt1 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:21.070 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.071 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.071 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.071 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.071 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.071 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.071 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.071 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.071 05:48:28 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.071 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.071 "name": "raid_bdev1", 00:10:21.071 "uuid": "7a642d4e-d02b-4180-af96-1cbe91e63769", 00:10:21.071 "strip_size_kb": 0, 00:10:21.071 "state": "configuring", 00:10:21.071 "raid_level": "raid1", 00:10:21.071 "superblock": true, 00:10:21.071 "num_base_bdevs": 3, 00:10:21.071 "num_base_bdevs_discovered": 1, 00:10:21.071 "num_base_bdevs_operational": 2, 00:10:21.071 "base_bdevs_list": [ 00:10:21.071 { 00:10:21.071 "name": null, 00:10:21.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.071 "is_configured": false, 00:10:21.071 "data_offset": 2048, 00:10:21.071 "data_size": 63488 00:10:21.071 }, 00:10:21.071 { 00:10:21.071 "name": "pt2", 00:10:21.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.071 "is_configured": true, 00:10:21.071 "data_offset": 2048, 00:10:21.071 "data_size": 63488 00:10:21.071 }, 00:10:21.071 { 00:10:21.071 "name": null, 00:10:21.071 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.071 "is_configured": false, 00:10:21.071 "data_offset": 2048, 00:10:21.071 "data_size": 63488 00:10:21.071 } 00:10:21.071 ] 00:10:21.071 }' 00:10:21.071 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.071 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.640 [2024-12-12 05:48:28.986110] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:21.640 [2024-12-12 05:48:28.986224] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.640 [2024-12-12 05:48:28.986268] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:21.640 [2024-12-12 05:48:28.986296] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.640 [2024-12-12 05:48:28.986861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.640 [2024-12-12 05:48:28.986921] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:21.640 [2024-12-12 05:48:28.987051] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:21.640 [2024-12-12 05:48:28.987105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:21.640 [2024-12-12 05:48:28.987273] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:21.640 [2024-12-12 05:48:28.987312] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:21.640 [2024-12-12 05:48:28.987598] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:21.640 [2024-12-12 05:48:28.987802] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:21.640 [2024-12-12 05:48:28.987851] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:21.640 [2024-12-12 05:48:28.988058] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.640 pt3 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.640 05:48:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.640 05:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:21.640 05:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.640 "name": "raid_bdev1", 00:10:21.640 "uuid": "7a642d4e-d02b-4180-af96-1cbe91e63769", 00:10:21.640 "strip_size_kb": 0, 00:10:21.640 "state": "online", 00:10:21.640 "raid_level": "raid1", 00:10:21.640 "superblock": true, 00:10:21.640 "num_base_bdevs": 3, 00:10:21.640 "num_base_bdevs_discovered": 2, 00:10:21.641 "num_base_bdevs_operational": 2, 00:10:21.641 "base_bdevs_list": [ 00:10:21.641 { 00:10:21.641 "name": null, 00:10:21.641 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.641 "is_configured": false, 00:10:21.641 "data_offset": 2048, 00:10:21.641 "data_size": 63488 00:10:21.641 }, 00:10:21.641 { 00:10:21.641 "name": "pt2", 00:10:21.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.641 "is_configured": true, 00:10:21.641 "data_offset": 2048, 00:10:21.641 "data_size": 63488 00:10:21.641 }, 00:10:21.641 { 00:10:21.641 "name": "pt3", 00:10:21.641 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.641 "is_configured": true, 00:10:21.641 "data_offset": 2048, 00:10:21.641 "data_size": 63488 00:10:21.641 } 00:10:21.641 ] 00:10:21.641 }' 00:10:21.641 05:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.641 05:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.899 05:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:21.900 05:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.900 05:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.900 05:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:21.900 05:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.900 05:48:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:21.900 05:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:21.900 05:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.900 05:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.900 05:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:21.900 [2024-12-12 05:48:29.413656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:22.159 05:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:22.159 05:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 7a642d4e-d02b-4180-af96-1cbe91e63769 '!=' 7a642d4e-d02b-4180-af96-1cbe91e63769 ']' 00:10:22.159 05:48:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 69574 00:10:22.159 05:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 69574 ']' 00:10:22.159 05:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 69574 00:10:22.159 05:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:22.159 05:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:22.159 05:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69574 00:10:22.159 killing process with pid 69574 00:10:22.159 05:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:22.159 05:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:22.159 05:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69574' 00:10:22.159 05:48:29 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 69574 00:10:22.159 [2024-12-12 05:48:29.517848] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:22.159 [2024-12-12 05:48:29.517946] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:22.159 [2024-12-12 05:48:29.518006] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:22.159 [2024-12-12 05:48:29.518018] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:22.159 05:48:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 69574 00:10:22.418 [2024-12-12 05:48:29.804880] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.795 ************************************ 00:10:23.795 END TEST raid_superblock_test 00:10:23.795 05:48:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:23.795 00:10:23.795 real 0m7.535s 00:10:23.795 user 0m11.859s 00:10:23.795 sys 0m1.205s 00:10:23.795 05:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.795 05:48:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.795 ************************************ 00:10:23.795 05:48:30 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:23.795 05:48:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:23.795 05:48:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.795 05:48:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.795 ************************************ 00:10:23.795 START TEST raid_read_error_test 00:10:23.795 ************************************ 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:10:23.795 05:48:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:23.795 05:48:30 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.u6vl2RsPlQ 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70014 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70014 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70014 ']' 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.795 05:48:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.795 [2024-12-12 05:48:31.057070] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:10:23.795 [2024-12-12 05:48:31.057266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70014 ] 00:10:23.795 [2024-12-12 05:48:31.214856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.054 [2024-12-12 05:48:31.328715] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.054 [2024-12-12 05:48:31.519722] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.054 [2024-12-12 05:48:31.519779] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.621 BaseBdev1_malloc 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.621 true 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.621 [2024-12-12 05:48:31.942678] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:24.621 [2024-12-12 05:48:31.942732] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.621 [2024-12-12 05:48:31.942751] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:24.621 [2024-12-12 05:48:31.942762] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.621 [2024-12-12 05:48:31.944852] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.621 [2024-12-12 05:48:31.944894] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:24.621 BaseBdev1 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.621 BaseBdev2_malloc 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.621 true 00:10:24.621 05:48:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.621 [2024-12-12 05:48:32.007632] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:24.621 [2024-12-12 05:48:32.007685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.621 [2024-12-12 05:48:32.007701] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:24.621 [2024-12-12 05:48:32.007712] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.621 [2024-12-12 05:48:32.009775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.621 [2024-12-12 05:48:32.009815] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:24.621 BaseBdev2 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.621 BaseBdev3_malloc 00:10:24.621 05:48:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.621 true 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.621 [2024-12-12 05:48:32.090595] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:24.621 [2024-12-12 05:48:32.090712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.621 [2024-12-12 05:48:32.090735] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:24.621 [2024-12-12 05:48:32.090746] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.621 [2024-12-12 05:48:32.092841] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.621 [2024-12-12 05:48:32.092879] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:24.621 BaseBdev3 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.621 [2024-12-12 05:48:32.102649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:24.621 [2024-12-12 05:48:32.104434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:24.621 [2024-12-12 05:48:32.104518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:24.621 [2024-12-12 05:48:32.104744] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:24.621 [2024-12-12 05:48:32.104757] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:24.621 [2024-12-12 05:48:32.105020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:24.621 [2024-12-12 05:48:32.105186] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:24.621 [2024-12-12 05:48:32.105198] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:24.621 [2024-12-12 05:48:32.105334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.621 05:48:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.621 05:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.880 05:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.880 "name": "raid_bdev1", 00:10:24.880 "uuid": "7e83a622-9f00-45ee-bc07-9bda78bb0ae6", 00:10:24.880 "strip_size_kb": 0, 00:10:24.880 "state": "online", 00:10:24.880 "raid_level": "raid1", 00:10:24.880 "superblock": true, 00:10:24.880 "num_base_bdevs": 3, 00:10:24.880 "num_base_bdevs_discovered": 3, 00:10:24.880 "num_base_bdevs_operational": 3, 00:10:24.880 "base_bdevs_list": [ 00:10:24.880 { 00:10:24.880 "name": "BaseBdev1", 00:10:24.880 "uuid": "e7e5c1c3-6da0-560a-aca3-0911de7cc615", 00:10:24.880 "is_configured": true, 00:10:24.880 "data_offset": 2048, 00:10:24.880 "data_size": 63488 00:10:24.880 }, 00:10:24.880 { 00:10:24.880 "name": "BaseBdev2", 00:10:24.880 "uuid": "5f560668-dc66-514e-816a-1307c488b9ed", 00:10:24.880 "is_configured": true, 00:10:24.880 "data_offset": 2048, 00:10:24.880 "data_size": 63488 
00:10:24.880 }, 00:10:24.880 { 00:10:24.880 "name": "BaseBdev3", 00:10:24.880 "uuid": "0446f147-a397-51b4-957b-66521a7a6150", 00:10:24.880 "is_configured": true, 00:10:24.880 "data_offset": 2048, 00:10:24.880 "data_size": 63488 00:10:24.880 } 00:10:24.880 ] 00:10:24.880 }' 00:10:24.880 05:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.880 05:48:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.138 05:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:25.138 05:48:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:25.138 [2024-12-12 05:48:32.619055] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.075 
05:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.075 05:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.334 05:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.334 "name": "raid_bdev1", 00:10:26.334 "uuid": "7e83a622-9f00-45ee-bc07-9bda78bb0ae6", 00:10:26.334 "strip_size_kb": 0, 00:10:26.334 "state": "online", 00:10:26.334 "raid_level": "raid1", 00:10:26.334 "superblock": true, 00:10:26.334 "num_base_bdevs": 3, 00:10:26.334 "num_base_bdevs_discovered": 3, 00:10:26.334 "num_base_bdevs_operational": 3, 00:10:26.334 "base_bdevs_list": [ 00:10:26.334 { 00:10:26.334 "name": "BaseBdev1", 00:10:26.334 "uuid": "e7e5c1c3-6da0-560a-aca3-0911de7cc615", 
00:10:26.334 "is_configured": true, 00:10:26.334 "data_offset": 2048, 00:10:26.334 "data_size": 63488 00:10:26.334 }, 00:10:26.334 { 00:10:26.334 "name": "BaseBdev2", 00:10:26.334 "uuid": "5f560668-dc66-514e-816a-1307c488b9ed", 00:10:26.334 "is_configured": true, 00:10:26.334 "data_offset": 2048, 00:10:26.334 "data_size": 63488 00:10:26.334 }, 00:10:26.334 { 00:10:26.334 "name": "BaseBdev3", 00:10:26.334 "uuid": "0446f147-a397-51b4-957b-66521a7a6150", 00:10:26.334 "is_configured": true, 00:10:26.334 "data_offset": 2048, 00:10:26.334 "data_size": 63488 00:10:26.334 } 00:10:26.334 ] 00:10:26.334 }' 00:10:26.334 05:48:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.334 05:48:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.593 05:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:26.593 05:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.593 05:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.593 [2024-12-12 05:48:34.046699] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:26.593 [2024-12-12 05:48:34.046732] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.593 [2024-12-12 05:48:34.049655] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.593 [2024-12-12 05:48:34.049703] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.593 [2024-12-12 05:48:34.049805] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.593 [2024-12-12 05:48:34.049815] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:26.593 { 00:10:26.593 "results": [ 00:10:26.593 { 00:10:26.593 "job": "raid_bdev1", 
00:10:26.593 "core_mask": "0x1", 00:10:26.593 "workload": "randrw", 00:10:26.593 "percentage": 50, 00:10:26.593 "status": "finished", 00:10:26.593 "queue_depth": 1, 00:10:26.593 "io_size": 131072, 00:10:26.593 "runtime": 1.428622, 00:10:26.593 "iops": 13419.224959436437, 00:10:26.593 "mibps": 1677.4031199295546, 00:10:26.593 "io_failed": 0, 00:10:26.593 "io_timeout": 0, 00:10:26.593 "avg_latency_us": 71.82411461634987, 00:10:26.593 "min_latency_us": 24.370305676855896, 00:10:26.593 "max_latency_us": 1523.926637554585 00:10:26.593 } 00:10:26.593 ], 00:10:26.593 "core_count": 1 00:10:26.593 } 00:10:26.593 05:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.593 05:48:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70014 00:10:26.593 05:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70014 ']' 00:10:26.593 05:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70014 00:10:26.593 05:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:26.593 05:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.593 05:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70014 00:10:26.593 05:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.593 killing process with pid 70014 00:10:26.593 05:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.593 05:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70014' 00:10:26.593 05:48:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70014 00:10:26.593 [2024-12-12 05:48:34.096219] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:26.593 05:48:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70014 00:10:26.851 [2024-12-12 05:48:34.330452] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:28.226 05:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:28.226 05:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.u6vl2RsPlQ 00:10:28.226 05:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:28.226 05:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:28.226 05:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:28.226 05:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:28.226 05:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:28.226 05:48:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:28.226 00:10:28.226 real 0m4.568s 00:10:28.226 user 0m5.448s 00:10:28.226 sys 0m0.540s 00:10:28.226 05:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.226 05:48:35 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.226 ************************************ 00:10:28.226 END TEST raid_read_error_test 00:10:28.226 ************************************ 00:10:28.226 05:48:35 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:28.226 05:48:35 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:28.226 05:48:35 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.226 05:48:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:28.226 ************************************ 00:10:28.226 START TEST raid_write_error_test 00:10:28.226 ************************************ 00:10:28.226 05:48:35 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:10:28.226 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:28.226 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:28.226 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:28.226 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:28.226 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.226 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:28.226 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.226 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.226 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:28.226 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.226 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.226 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:28.226 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.226 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.226 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:28.226 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:28.226 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:28.226 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:28.227 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:28.227 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:28.227 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:28.227 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:28.227 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:28.227 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:28.227 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LkqiibAkcd 00:10:28.227 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70160 00:10:28.227 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:28.227 05:48:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70160 00:10:28.227 05:48:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 70160 ']' 00:10:28.227 05:48:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.227 05:48:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.227 05:48:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:28.227 05:48:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.227 05:48:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.227 [2024-12-12 05:48:35.692389] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:10:28.227 [2024-12-12 05:48:35.692541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70160 ] 00:10:28.485 [2024-12-12 05:48:35.866323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.485 [2024-12-12 05:48:35.981852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.744 [2024-12-12 05:48:36.186853] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.744 [2024-12-12 05:48:36.186915] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.313 BaseBdev1_malloc 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.313 true 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.313 [2024-12-12 05:48:36.582920] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:29.313 [2024-12-12 05:48:36.583057] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.313 [2024-12-12 05:48:36.583086] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:29.313 [2024-12-12 05:48:36.583099] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.313 [2024-12-12 05:48:36.585194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.313 [2024-12-12 05:48:36.585239] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:29.313 BaseBdev1 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:29.313 BaseBdev2_malloc 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.313 true 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.313 [2024-12-12 05:48:36.653042] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:29.313 [2024-12-12 05:48:36.653176] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.313 [2024-12-12 05:48:36.653200] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:29.313 [2024-12-12 05:48:36.653212] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.313 [2024-12-12 05:48:36.655357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.313 [2024-12-12 05:48:36.655399] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:29.313 BaseBdev2 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.313 05:48:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.313 BaseBdev3_malloc 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.313 true 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.313 [2024-12-12 05:48:36.732850] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:29.313 [2024-12-12 05:48:36.732899] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.313 [2024-12-12 05:48:36.732916] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:29.313 [2024-12-12 05:48:36.732927] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.313 [2024-12-12 05:48:36.734990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.313 [2024-12-12 05:48:36.735031] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:29.313 BaseBdev3 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.313 [2024-12-12 05:48:36.744898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.313 [2024-12-12 05:48:36.746692] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.313 [2024-12-12 05:48:36.746763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.313 [2024-12-12 05:48:36.746976] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:29.313 [2024-12-12 05:48:36.746988] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:29.313 [2024-12-12 05:48:36.747233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:29.313 [2024-12-12 05:48:36.747383] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:29.313 [2024-12-12 05:48:36.747394] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:29.313 [2024-12-12 05:48:36.747549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.313 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.313 "name": "raid_bdev1", 00:10:29.313 "uuid": "243e3daf-1aba-4c65-a1a1-d35c9d7fa164", 00:10:29.313 "strip_size_kb": 0, 00:10:29.313 "state": "online", 00:10:29.313 "raid_level": "raid1", 00:10:29.313 "superblock": true, 00:10:29.313 "num_base_bdevs": 3, 00:10:29.313 "num_base_bdevs_discovered": 3, 00:10:29.313 "num_base_bdevs_operational": 3, 00:10:29.313 "base_bdevs_list": [ 00:10:29.313 { 00:10:29.313 "name": "BaseBdev1", 00:10:29.314 
"uuid": "add2e2f9-c9e3-5931-906d-91ef265092e0", 00:10:29.314 "is_configured": true, 00:10:29.314 "data_offset": 2048, 00:10:29.314 "data_size": 63488 00:10:29.314 }, 00:10:29.314 { 00:10:29.314 "name": "BaseBdev2", 00:10:29.314 "uuid": "c39d5bcf-ffdc-5661-8dce-7e4062310c6a", 00:10:29.314 "is_configured": true, 00:10:29.314 "data_offset": 2048, 00:10:29.314 "data_size": 63488 00:10:29.314 }, 00:10:29.314 { 00:10:29.314 "name": "BaseBdev3", 00:10:29.314 "uuid": "fef54835-100e-5ab3-af5d-36a8fbdbeff8", 00:10:29.314 "is_configured": true, 00:10:29.314 "data_offset": 2048, 00:10:29.314 "data_size": 63488 00:10:29.314 } 00:10:29.314 ] 00:10:29.314 }' 00:10:29.314 05:48:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.314 05:48:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.887 05:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:29.887 05:48:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:29.887 [2024-12-12 05:48:37.257245] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.824 [2024-12-12 05:48:38.173051] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:30.824 [2024-12-12 05:48:38.173192] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:30.824 [2024-12-12 05:48:38.173460] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.824 "name": "raid_bdev1", 00:10:30.824 "uuid": "243e3daf-1aba-4c65-a1a1-d35c9d7fa164", 00:10:30.824 "strip_size_kb": 0, 00:10:30.824 "state": "online", 00:10:30.824 "raid_level": "raid1", 00:10:30.824 "superblock": true, 00:10:30.824 "num_base_bdevs": 3, 00:10:30.824 "num_base_bdevs_discovered": 2, 00:10:30.824 "num_base_bdevs_operational": 2, 00:10:30.824 "base_bdevs_list": [ 00:10:30.824 { 00:10:30.824 "name": null, 00:10:30.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:30.824 "is_configured": false, 00:10:30.824 "data_offset": 0, 00:10:30.824 "data_size": 63488 00:10:30.824 }, 00:10:30.824 { 00:10:30.824 "name": "BaseBdev2", 00:10:30.824 "uuid": "c39d5bcf-ffdc-5661-8dce-7e4062310c6a", 00:10:30.824 "is_configured": true, 00:10:30.824 "data_offset": 2048, 00:10:30.824 "data_size": 63488 00:10:30.824 }, 00:10:30.824 { 00:10:30.824 "name": "BaseBdev3", 00:10:30.824 "uuid": "fef54835-100e-5ab3-af5d-36a8fbdbeff8", 00:10:30.824 "is_configured": true, 00:10:30.824 "data_offset": 2048, 00:10:30.824 "data_size": 63488 00:10:30.824 } 00:10:30.824 ] 00:10:30.824 }' 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.824 05:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.083 05:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:31.083 05:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.083 05:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.083 [2024-12-12 05:48:38.602943] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:31.083 [2024-12-12 05:48:38.603045] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:31.341 [2024-12-12 05:48:38.606019] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:31.341 [2024-12-12 05:48:38.606136] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.341 [2024-12-12 05:48:38.606252] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:31.341 [2024-12-12 05:48:38.606316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:31.341 05:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.341 { 00:10:31.341 "results": [ 00:10:31.341 { 00:10:31.341 "job": "raid_bdev1", 00:10:31.341 "core_mask": "0x1", 00:10:31.341 "workload": "randrw", 00:10:31.341 "percentage": 50, 00:10:31.341 "status": "finished", 00:10:31.341 "queue_depth": 1, 00:10:31.341 "io_size": 131072, 00:10:31.341 "runtime": 1.346688, 00:10:31.341 "iops": 14714.618382283054, 00:10:31.341 "mibps": 1839.3272977853817, 00:10:31.341 "io_failed": 0, 00:10:31.341 "io_timeout": 0, 00:10:31.341 "avg_latency_us": 65.2288847792706, 00:10:31.341 "min_latency_us": 24.482096069868994, 00:10:31.341 "max_latency_us": 1430.9170305676855 00:10:31.341 } 00:10:31.341 ], 00:10:31.341 "core_count": 1 00:10:31.341 } 00:10:31.341 05:48:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70160 00:10:31.341 05:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 70160 ']' 00:10:31.341 05:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 70160 00:10:31.341 05:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:31.341 05:48:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.341 05:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70160 00:10:31.341 05:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:31.341 killing process with pid 70160 00:10:31.341 05:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:31.341 05:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70160' 00:10:31.341 05:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 70160 00:10:31.341 [2024-12-12 05:48:38.648396] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:31.341 05:48:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 70160 00:10:31.599 [2024-12-12 05:48:38.875090] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:32.975 05:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LkqiibAkcd 00:10:32.975 05:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:32.975 05:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:32.975 05:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:32.975 05:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:32.975 05:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:32.975 05:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:32.975 05:48:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:32.975 00:10:32.975 real 0m4.479s 00:10:32.975 user 0m5.286s 00:10:32.975 sys 0m0.548s 00:10:32.975 
************************************ 00:10:32.975 END TEST raid_write_error_test 00:10:32.975 ************************************ 00:10:32.975 05:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.975 05:48:40 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.975 05:48:40 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:10:32.975 05:48:40 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:32.975 05:48:40 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:10:32.975 05:48:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:32.975 05:48:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.975 05:48:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:32.975 ************************************ 00:10:32.975 START TEST raid_state_function_test 00:10:32.975 ************************************ 00:10:32.975 05:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:10:32.975 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:32.975 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:32.975 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:32.975 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:32.975 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:32.975 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.975 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:32.975 05:48:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.975 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.975 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:32.975 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.975 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.975 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:32.976 Process raid pid: 70298 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=70298 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70298' 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 70298 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 70298 ']' 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.976 05:48:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.976 [2024-12-12 05:48:40.235694] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:10:32.976 [2024-12-12 05:48:40.235903] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.976 [2024-12-12 05:48:40.411277] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.235 [2024-12-12 05:48:40.528824] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.235 [2024-12-12 05:48:40.723652] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.235 [2024-12-12 05:48:40.723780] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.804 [2024-12-12 05:48:41.064305] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:33.804 [2024-12-12 05:48:41.064361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:33.804 [2024-12-12 05:48:41.064377] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:33.804 [2024-12-12 05:48:41.064387] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:33.804 [2024-12-12 05:48:41.064394] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:33.804 [2024-12-12 05:48:41.064403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:33.804 [2024-12-12 05:48:41.064409] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:33.804 [2024-12-12 05:48:41.064418] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.804 "name": "Existed_Raid", 00:10:33.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.804 "strip_size_kb": 64, 00:10:33.804 "state": "configuring", 00:10:33.804 "raid_level": "raid0", 00:10:33.804 "superblock": false, 00:10:33.804 "num_base_bdevs": 4, 00:10:33.804 "num_base_bdevs_discovered": 0, 00:10:33.804 "num_base_bdevs_operational": 4, 00:10:33.804 "base_bdevs_list": [ 00:10:33.804 { 00:10:33.804 "name": "BaseBdev1", 00:10:33.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.804 "is_configured": false, 00:10:33.804 "data_offset": 0, 00:10:33.804 "data_size": 0 00:10:33.804 }, 00:10:33.804 { 00:10:33.804 "name": "BaseBdev2", 00:10:33.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.804 "is_configured": false, 00:10:33.804 "data_offset": 0, 00:10:33.804 "data_size": 0 00:10:33.804 }, 00:10:33.804 { 00:10:33.804 "name": "BaseBdev3", 00:10:33.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.804 "is_configured": false, 00:10:33.804 "data_offset": 0, 00:10:33.804 "data_size": 0 00:10:33.804 }, 00:10:33.804 { 00:10:33.804 "name": "BaseBdev4", 00:10:33.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.804 "is_configured": false, 00:10:33.804 "data_offset": 0, 00:10:33.804 "data_size": 0 00:10:33.804 } 00:10:33.804 ] 00:10:33.804 }' 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.804 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.064 [2024-12-12 05:48:41.507472] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.064 [2024-12-12 05:48:41.507570] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.064 [2024-12-12 05:48:41.519456] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:34.064 [2024-12-12 05:48:41.519548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:34.064 [2024-12-12 05:48:41.519588] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.064 [2024-12-12 05:48:41.519610] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.064 [2024-12-12 05:48:41.519628] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:34.064 [2024-12-12 05:48:41.519648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:34.064 [2024-12-12 05:48:41.519665] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:34.064 [2024-12-12 05:48:41.519705] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.064 [2024-12-12 05:48:41.566342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.064 BaseBdev1 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.064 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.324 [ 00:10:34.324 { 00:10:34.324 "name": "BaseBdev1", 00:10:34.324 "aliases": [ 00:10:34.324 "f5dce19c-e600-471c-924b-4075b787fe55" 00:10:34.324 ], 00:10:34.324 "product_name": "Malloc disk", 00:10:34.324 "block_size": 512, 00:10:34.324 "num_blocks": 65536, 00:10:34.324 "uuid": "f5dce19c-e600-471c-924b-4075b787fe55", 00:10:34.324 "assigned_rate_limits": { 00:10:34.324 "rw_ios_per_sec": 0, 00:10:34.324 "rw_mbytes_per_sec": 0, 00:10:34.324 "r_mbytes_per_sec": 0, 00:10:34.324 "w_mbytes_per_sec": 0 00:10:34.324 }, 00:10:34.324 "claimed": true, 00:10:34.324 "claim_type": "exclusive_write", 00:10:34.324 "zoned": false, 00:10:34.324 "supported_io_types": { 00:10:34.324 "read": true, 00:10:34.324 "write": true, 00:10:34.324 "unmap": true, 00:10:34.324 "flush": true, 00:10:34.324 "reset": true, 00:10:34.324 "nvme_admin": false, 00:10:34.324 "nvme_io": false, 00:10:34.324 "nvme_io_md": false, 00:10:34.324 "write_zeroes": true, 00:10:34.324 "zcopy": true, 00:10:34.324 "get_zone_info": false, 00:10:34.324 "zone_management": false, 00:10:34.324 "zone_append": false, 00:10:34.324 "compare": false, 00:10:34.324 "compare_and_write": false, 00:10:34.324 "abort": true, 00:10:34.324 "seek_hole": false, 00:10:34.324 "seek_data": false, 00:10:34.324 "copy": true, 00:10:34.324 "nvme_iov_md": false 00:10:34.324 }, 00:10:34.324 "memory_domains": [ 00:10:34.324 { 00:10:34.324 "dma_device_id": "system", 00:10:34.324 "dma_device_type": 1 00:10:34.324 }, 00:10:34.324 { 00:10:34.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.324 "dma_device_type": 2 00:10:34.324 } 00:10:34.324 ], 00:10:34.324 "driver_specific": {} 00:10:34.324 } 00:10:34.324 ] 00:10:34.324 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:34.324 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:34.324 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:34.324 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.324 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.324 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.324 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.324 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.324 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.324 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.324 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.324 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.324 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.324 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.324 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.324 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.324 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.324 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.324 "name": "Existed_Raid", 
00:10:34.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.324 "strip_size_kb": 64, 00:10:34.324 "state": "configuring", 00:10:34.324 "raid_level": "raid0", 00:10:34.324 "superblock": false, 00:10:34.324 "num_base_bdevs": 4, 00:10:34.324 "num_base_bdevs_discovered": 1, 00:10:34.324 "num_base_bdevs_operational": 4, 00:10:34.324 "base_bdevs_list": [ 00:10:34.324 { 00:10:34.324 "name": "BaseBdev1", 00:10:34.324 "uuid": "f5dce19c-e600-471c-924b-4075b787fe55", 00:10:34.324 "is_configured": true, 00:10:34.324 "data_offset": 0, 00:10:34.324 "data_size": 65536 00:10:34.324 }, 00:10:34.324 { 00:10:34.324 "name": "BaseBdev2", 00:10:34.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.324 "is_configured": false, 00:10:34.324 "data_offset": 0, 00:10:34.324 "data_size": 0 00:10:34.324 }, 00:10:34.324 { 00:10:34.324 "name": "BaseBdev3", 00:10:34.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.325 "is_configured": false, 00:10:34.325 "data_offset": 0, 00:10:34.325 "data_size": 0 00:10:34.325 }, 00:10:34.325 { 00:10:34.325 "name": "BaseBdev4", 00:10:34.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.325 "is_configured": false, 00:10:34.325 "data_offset": 0, 00:10:34.325 "data_size": 0 00:10:34.325 } 00:10:34.325 ] 00:10:34.325 }' 00:10:34.325 05:48:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.325 05:48:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.584 [2024-12-12 05:48:42.033571] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.584 [2024-12-12 05:48:42.033612] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.584 [2024-12-12 05:48:42.045610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.584 [2024-12-12 05:48:42.047344] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.584 [2024-12-12 05:48:42.047386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.584 [2024-12-12 05:48:42.047396] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:34.584 [2024-12-12 05:48:42.047406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:34.584 [2024-12-12 05:48:42.047412] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:34.584 [2024-12-12 05:48:42.047420] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.584 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.584 "name": "Existed_Raid", 00:10:34.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.584 "strip_size_kb": 64, 00:10:34.584 "state": "configuring", 00:10:34.584 "raid_level": "raid0", 00:10:34.584 "superblock": false, 00:10:34.584 "num_base_bdevs": 4, 00:10:34.584 
"num_base_bdevs_discovered": 1, 00:10:34.584 "num_base_bdevs_operational": 4, 00:10:34.584 "base_bdevs_list": [ 00:10:34.584 { 00:10:34.584 "name": "BaseBdev1", 00:10:34.584 "uuid": "f5dce19c-e600-471c-924b-4075b787fe55", 00:10:34.584 "is_configured": true, 00:10:34.584 "data_offset": 0, 00:10:34.584 "data_size": 65536 00:10:34.584 }, 00:10:34.584 { 00:10:34.584 "name": "BaseBdev2", 00:10:34.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.584 "is_configured": false, 00:10:34.584 "data_offset": 0, 00:10:34.584 "data_size": 0 00:10:34.584 }, 00:10:34.584 { 00:10:34.584 "name": "BaseBdev3", 00:10:34.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.585 "is_configured": false, 00:10:34.585 "data_offset": 0, 00:10:34.585 "data_size": 0 00:10:34.585 }, 00:10:34.585 { 00:10:34.585 "name": "BaseBdev4", 00:10:34.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.585 "is_configured": false, 00:10:34.585 "data_offset": 0, 00:10:34.585 "data_size": 0 00:10:34.585 } 00:10:34.585 ] 00:10:34.585 }' 00:10:34.585 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.585 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.154 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:35.154 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.154 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.154 [2024-12-12 05:48:42.498194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:35.154 BaseBdev2 00:10:35.154 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.154 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:35.154 05:48:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:35.154 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.154 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.154 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.154 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.154 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.154 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.154 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.154 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.154 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:35.154 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.154 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.154 [ 00:10:35.154 { 00:10:35.154 "name": "BaseBdev2", 00:10:35.154 "aliases": [ 00:10:35.154 "b4fdd5b5-58c9-41b8-92cb-5ead1a8decdf" 00:10:35.154 ], 00:10:35.154 "product_name": "Malloc disk", 00:10:35.154 "block_size": 512, 00:10:35.154 "num_blocks": 65536, 00:10:35.154 "uuid": "b4fdd5b5-58c9-41b8-92cb-5ead1a8decdf", 00:10:35.154 "assigned_rate_limits": { 00:10:35.154 "rw_ios_per_sec": 0, 00:10:35.154 "rw_mbytes_per_sec": 0, 00:10:35.154 "r_mbytes_per_sec": 0, 00:10:35.154 "w_mbytes_per_sec": 0 00:10:35.154 }, 00:10:35.154 "claimed": true, 00:10:35.154 "claim_type": "exclusive_write", 00:10:35.154 "zoned": false, 00:10:35.154 "supported_io_types": { 
00:10:35.154 "read": true, 00:10:35.154 "write": true, 00:10:35.154 "unmap": true, 00:10:35.154 "flush": true, 00:10:35.154 "reset": true, 00:10:35.154 "nvme_admin": false, 00:10:35.154 "nvme_io": false, 00:10:35.155 "nvme_io_md": false, 00:10:35.155 "write_zeroes": true, 00:10:35.155 "zcopy": true, 00:10:35.155 "get_zone_info": false, 00:10:35.155 "zone_management": false, 00:10:35.155 "zone_append": false, 00:10:35.155 "compare": false, 00:10:35.155 "compare_and_write": false, 00:10:35.155 "abort": true, 00:10:35.155 "seek_hole": false, 00:10:35.155 "seek_data": false, 00:10:35.155 "copy": true, 00:10:35.155 "nvme_iov_md": false 00:10:35.155 }, 00:10:35.155 "memory_domains": [ 00:10:35.155 { 00:10:35.155 "dma_device_id": "system", 00:10:35.155 "dma_device_type": 1 00:10:35.155 }, 00:10:35.155 { 00:10:35.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.155 "dma_device_type": 2 00:10:35.155 } 00:10:35.155 ], 00:10:35.155 "driver_specific": {} 00:10:35.155 } 00:10:35.155 ] 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.155 "name": "Existed_Raid", 00:10:35.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.155 "strip_size_kb": 64, 00:10:35.155 "state": "configuring", 00:10:35.155 "raid_level": "raid0", 00:10:35.155 "superblock": false, 00:10:35.155 "num_base_bdevs": 4, 00:10:35.155 "num_base_bdevs_discovered": 2, 00:10:35.155 "num_base_bdevs_operational": 4, 00:10:35.155 "base_bdevs_list": [ 00:10:35.155 { 00:10:35.155 "name": "BaseBdev1", 00:10:35.155 "uuid": "f5dce19c-e600-471c-924b-4075b787fe55", 00:10:35.155 "is_configured": true, 00:10:35.155 "data_offset": 0, 00:10:35.155 "data_size": 65536 00:10:35.155 }, 00:10:35.155 { 00:10:35.155 "name": "BaseBdev2", 00:10:35.155 "uuid": "b4fdd5b5-58c9-41b8-92cb-5ead1a8decdf", 00:10:35.155 
"is_configured": true, 00:10:35.155 "data_offset": 0, 00:10:35.155 "data_size": 65536 00:10:35.155 }, 00:10:35.155 { 00:10:35.155 "name": "BaseBdev3", 00:10:35.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.155 "is_configured": false, 00:10:35.155 "data_offset": 0, 00:10:35.155 "data_size": 0 00:10:35.155 }, 00:10:35.155 { 00:10:35.155 "name": "BaseBdev4", 00:10:35.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.155 "is_configured": false, 00:10:35.155 "data_offset": 0, 00:10:35.155 "data_size": 0 00:10:35.155 } 00:10:35.155 ] 00:10:35.155 }' 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.155 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.724 05:48:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:35.724 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.724 05:48:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.724 [2024-12-12 05:48:43.003935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:35.724 BaseBdev3 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.724 [ 00:10:35.724 { 00:10:35.724 "name": "BaseBdev3", 00:10:35.724 "aliases": [ 00:10:35.724 "d4d970b9-1ab1-4db1-bf81-138f0d75eea7" 00:10:35.724 ], 00:10:35.724 "product_name": "Malloc disk", 00:10:35.724 "block_size": 512, 00:10:35.724 "num_blocks": 65536, 00:10:35.724 "uuid": "d4d970b9-1ab1-4db1-bf81-138f0d75eea7", 00:10:35.724 "assigned_rate_limits": { 00:10:35.724 "rw_ios_per_sec": 0, 00:10:35.724 "rw_mbytes_per_sec": 0, 00:10:35.724 "r_mbytes_per_sec": 0, 00:10:35.724 "w_mbytes_per_sec": 0 00:10:35.724 }, 00:10:35.724 "claimed": true, 00:10:35.724 "claim_type": "exclusive_write", 00:10:35.724 "zoned": false, 00:10:35.724 "supported_io_types": { 00:10:35.724 "read": true, 00:10:35.724 "write": true, 00:10:35.724 "unmap": true, 00:10:35.724 "flush": true, 00:10:35.724 "reset": true, 00:10:35.724 "nvme_admin": false, 00:10:35.724 "nvme_io": false, 00:10:35.724 "nvme_io_md": false, 00:10:35.724 "write_zeroes": true, 00:10:35.724 "zcopy": true, 00:10:35.724 "get_zone_info": false, 00:10:35.724 "zone_management": false, 00:10:35.724 "zone_append": false, 00:10:35.724 "compare": false, 00:10:35.724 "compare_and_write": false, 
00:10:35.724 "abort": true, 00:10:35.724 "seek_hole": false, 00:10:35.724 "seek_data": false, 00:10:35.724 "copy": true, 00:10:35.724 "nvme_iov_md": false 00:10:35.724 }, 00:10:35.724 "memory_domains": [ 00:10:35.724 { 00:10:35.724 "dma_device_id": "system", 00:10:35.724 "dma_device_type": 1 00:10:35.724 }, 00:10:35.724 { 00:10:35.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.724 "dma_device_type": 2 00:10:35.724 } 00:10:35.724 ], 00:10:35.724 "driver_specific": {} 00:10:35.724 } 00:10:35.724 ] 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.724 "name": "Existed_Raid", 00:10:35.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.724 "strip_size_kb": 64, 00:10:35.724 "state": "configuring", 00:10:35.724 "raid_level": "raid0", 00:10:35.724 "superblock": false, 00:10:35.724 "num_base_bdevs": 4, 00:10:35.724 "num_base_bdevs_discovered": 3, 00:10:35.724 "num_base_bdevs_operational": 4, 00:10:35.724 "base_bdevs_list": [ 00:10:35.724 { 00:10:35.724 "name": "BaseBdev1", 00:10:35.724 "uuid": "f5dce19c-e600-471c-924b-4075b787fe55", 00:10:35.724 "is_configured": true, 00:10:35.724 "data_offset": 0, 00:10:35.724 "data_size": 65536 00:10:35.724 }, 00:10:35.724 { 00:10:35.724 "name": "BaseBdev2", 00:10:35.724 "uuid": "b4fdd5b5-58c9-41b8-92cb-5ead1a8decdf", 00:10:35.724 "is_configured": true, 00:10:35.724 "data_offset": 0, 00:10:35.724 "data_size": 65536 00:10:35.724 }, 00:10:35.724 { 00:10:35.724 "name": "BaseBdev3", 00:10:35.724 "uuid": "d4d970b9-1ab1-4db1-bf81-138f0d75eea7", 00:10:35.724 "is_configured": true, 00:10:35.724 "data_offset": 0, 00:10:35.724 "data_size": 65536 00:10:35.724 }, 00:10:35.724 { 00:10:35.724 "name": "BaseBdev4", 00:10:35.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.724 "is_configured": false, 
00:10:35.724 "data_offset": 0, 00:10:35.724 "data_size": 0 00:10:35.724 } 00:10:35.724 ] 00:10:35.724 }' 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.724 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.984 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:35.984 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.984 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.242 [2024-12-12 05:48:43.505547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:36.242 [2024-12-12 05:48:43.505588] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:36.242 [2024-12-12 05:48:43.505597] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:36.242 [2024-12-12 05:48:43.505871] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:36.242 [2024-12-12 05:48:43.506034] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:36.242 [2024-12-12 05:48:43.506062] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:36.242 [2024-12-12 05:48:43.506330] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.242 BaseBdev4 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.243 [ 00:10:36.243 { 00:10:36.243 "name": "BaseBdev4", 00:10:36.243 "aliases": [ 00:10:36.243 "3099fd3a-2329-4aec-92a2-daa2f39a4d06" 00:10:36.243 ], 00:10:36.243 "product_name": "Malloc disk", 00:10:36.243 "block_size": 512, 00:10:36.243 "num_blocks": 65536, 00:10:36.243 "uuid": "3099fd3a-2329-4aec-92a2-daa2f39a4d06", 00:10:36.243 "assigned_rate_limits": { 00:10:36.243 "rw_ios_per_sec": 0, 00:10:36.243 "rw_mbytes_per_sec": 0, 00:10:36.243 "r_mbytes_per_sec": 0, 00:10:36.243 "w_mbytes_per_sec": 0 00:10:36.243 }, 00:10:36.243 "claimed": true, 00:10:36.243 "claim_type": "exclusive_write", 00:10:36.243 "zoned": false, 00:10:36.243 "supported_io_types": { 00:10:36.243 "read": true, 00:10:36.243 "write": true, 00:10:36.243 "unmap": true, 00:10:36.243 "flush": true, 00:10:36.243 "reset": true, 00:10:36.243 
"nvme_admin": false, 00:10:36.243 "nvme_io": false, 00:10:36.243 "nvme_io_md": false, 00:10:36.243 "write_zeroes": true, 00:10:36.243 "zcopy": true, 00:10:36.243 "get_zone_info": false, 00:10:36.243 "zone_management": false, 00:10:36.243 "zone_append": false, 00:10:36.243 "compare": false, 00:10:36.243 "compare_and_write": false, 00:10:36.243 "abort": true, 00:10:36.243 "seek_hole": false, 00:10:36.243 "seek_data": false, 00:10:36.243 "copy": true, 00:10:36.243 "nvme_iov_md": false 00:10:36.243 }, 00:10:36.243 "memory_domains": [ 00:10:36.243 { 00:10:36.243 "dma_device_id": "system", 00:10:36.243 "dma_device_type": 1 00:10:36.243 }, 00:10:36.243 { 00:10:36.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.243 "dma_device_type": 2 00:10:36.243 } 00:10:36.243 ], 00:10:36.243 "driver_specific": {} 00:10:36.243 } 00:10:36.243 ] 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:36.243 05:48:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.243 "name": "Existed_Raid", 00:10:36.243 "uuid": "71e9463e-8423-4761-bfb4-b97ce0fe89dc", 00:10:36.243 "strip_size_kb": 64, 00:10:36.243 "state": "online", 00:10:36.243 "raid_level": "raid0", 00:10:36.243 "superblock": false, 00:10:36.243 "num_base_bdevs": 4, 00:10:36.243 "num_base_bdevs_discovered": 4, 00:10:36.243 "num_base_bdevs_operational": 4, 00:10:36.243 "base_bdevs_list": [ 00:10:36.243 { 00:10:36.243 "name": "BaseBdev1", 00:10:36.243 "uuid": "f5dce19c-e600-471c-924b-4075b787fe55", 00:10:36.243 "is_configured": true, 00:10:36.243 "data_offset": 0, 00:10:36.243 "data_size": 65536 00:10:36.243 }, 00:10:36.243 { 00:10:36.243 "name": "BaseBdev2", 00:10:36.243 "uuid": "b4fdd5b5-58c9-41b8-92cb-5ead1a8decdf", 00:10:36.243 "is_configured": true, 00:10:36.243 "data_offset": 0, 00:10:36.243 "data_size": 65536 00:10:36.243 }, 00:10:36.243 { 00:10:36.243 "name": "BaseBdev3", 00:10:36.243 "uuid": 
"d4d970b9-1ab1-4db1-bf81-138f0d75eea7", 00:10:36.243 "is_configured": true, 00:10:36.243 "data_offset": 0, 00:10:36.243 "data_size": 65536 00:10:36.243 }, 00:10:36.243 { 00:10:36.243 "name": "BaseBdev4", 00:10:36.243 "uuid": "3099fd3a-2329-4aec-92a2-daa2f39a4d06", 00:10:36.243 "is_configured": true, 00:10:36.243 "data_offset": 0, 00:10:36.243 "data_size": 65536 00:10:36.243 } 00:10:36.243 ] 00:10:36.243 }' 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.243 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.503 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:36.503 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:36.503 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:36.503 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:36.503 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:36.503 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:36.503 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:36.503 05:48:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:36.503 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.503 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.503 [2024-12-12 05:48:43.973128] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:36.503 05:48:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.503 05:48:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:36.503 "name": "Existed_Raid", 00:10:36.503 "aliases": [ 00:10:36.503 "71e9463e-8423-4761-bfb4-b97ce0fe89dc" 00:10:36.503 ], 00:10:36.503 "product_name": "Raid Volume", 00:10:36.503 "block_size": 512, 00:10:36.503 "num_blocks": 262144, 00:10:36.503 "uuid": "71e9463e-8423-4761-bfb4-b97ce0fe89dc", 00:10:36.503 "assigned_rate_limits": { 00:10:36.503 "rw_ios_per_sec": 0, 00:10:36.503 "rw_mbytes_per_sec": 0, 00:10:36.503 "r_mbytes_per_sec": 0, 00:10:36.503 "w_mbytes_per_sec": 0 00:10:36.503 }, 00:10:36.503 "claimed": false, 00:10:36.503 "zoned": false, 00:10:36.503 "supported_io_types": { 00:10:36.503 "read": true, 00:10:36.503 "write": true, 00:10:36.503 "unmap": true, 00:10:36.503 "flush": true, 00:10:36.503 "reset": true, 00:10:36.503 "nvme_admin": false, 00:10:36.503 "nvme_io": false, 00:10:36.503 "nvme_io_md": false, 00:10:36.503 "write_zeroes": true, 00:10:36.503 "zcopy": false, 00:10:36.503 "get_zone_info": false, 00:10:36.503 "zone_management": false, 00:10:36.503 "zone_append": false, 00:10:36.503 "compare": false, 00:10:36.503 "compare_and_write": false, 00:10:36.503 "abort": false, 00:10:36.504 "seek_hole": false, 00:10:36.504 "seek_data": false, 00:10:36.504 "copy": false, 00:10:36.504 "nvme_iov_md": false 00:10:36.504 }, 00:10:36.504 "memory_domains": [ 00:10:36.504 { 00:10:36.504 "dma_device_id": "system", 00:10:36.504 "dma_device_type": 1 00:10:36.504 }, 00:10:36.504 { 00:10:36.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.504 "dma_device_type": 2 00:10:36.504 }, 00:10:36.504 { 00:10:36.504 "dma_device_id": "system", 00:10:36.504 "dma_device_type": 1 00:10:36.504 }, 00:10:36.504 { 00:10:36.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.504 "dma_device_type": 2 00:10:36.504 }, 00:10:36.504 { 00:10:36.504 "dma_device_id": "system", 00:10:36.504 "dma_device_type": 1 00:10:36.504 }, 00:10:36.504 { 00:10:36.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:36.504 "dma_device_type": 2 00:10:36.504 }, 00:10:36.504 { 00:10:36.504 "dma_device_id": "system", 00:10:36.504 "dma_device_type": 1 00:10:36.504 }, 00:10:36.504 { 00:10:36.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.504 "dma_device_type": 2 00:10:36.504 } 00:10:36.504 ], 00:10:36.504 "driver_specific": { 00:10:36.504 "raid": { 00:10:36.504 "uuid": "71e9463e-8423-4761-bfb4-b97ce0fe89dc", 00:10:36.504 "strip_size_kb": 64, 00:10:36.504 "state": "online", 00:10:36.504 "raid_level": "raid0", 00:10:36.504 "superblock": false, 00:10:36.504 "num_base_bdevs": 4, 00:10:36.504 "num_base_bdevs_discovered": 4, 00:10:36.504 "num_base_bdevs_operational": 4, 00:10:36.504 "base_bdevs_list": [ 00:10:36.504 { 00:10:36.504 "name": "BaseBdev1", 00:10:36.504 "uuid": "f5dce19c-e600-471c-924b-4075b787fe55", 00:10:36.504 "is_configured": true, 00:10:36.504 "data_offset": 0, 00:10:36.504 "data_size": 65536 00:10:36.504 }, 00:10:36.504 { 00:10:36.504 "name": "BaseBdev2", 00:10:36.504 "uuid": "b4fdd5b5-58c9-41b8-92cb-5ead1a8decdf", 00:10:36.504 "is_configured": true, 00:10:36.504 "data_offset": 0, 00:10:36.504 "data_size": 65536 00:10:36.504 }, 00:10:36.504 { 00:10:36.504 "name": "BaseBdev3", 00:10:36.504 "uuid": "d4d970b9-1ab1-4db1-bf81-138f0d75eea7", 00:10:36.504 "is_configured": true, 00:10:36.504 "data_offset": 0, 00:10:36.504 "data_size": 65536 00:10:36.504 }, 00:10:36.504 { 00:10:36.504 "name": "BaseBdev4", 00:10:36.504 "uuid": "3099fd3a-2329-4aec-92a2-daa2f39a4d06", 00:10:36.504 "is_configured": true, 00:10:36.504 "data_offset": 0, 00:10:36.504 "data_size": 65536 00:10:36.504 } 00:10:36.504 ] 00:10:36.504 } 00:10:36.504 } 00:10:36.504 }' 00:10:36.504 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:36.764 BaseBdev2 00:10:36.764 BaseBdev3 
00:10:36.764 BaseBdev4' 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.764 05:48:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.764 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.024 05:48:44 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.024 [2024-12-12 05:48:44.324303] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:37.024 [2024-12-12 05:48:44.324339] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.024 [2024-12-12 05:48:44.324389] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.024 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.024 "name": "Existed_Raid", 00:10:37.024 "uuid": "71e9463e-8423-4761-bfb4-b97ce0fe89dc", 00:10:37.024 "strip_size_kb": 64, 00:10:37.024 "state": "offline", 00:10:37.024 "raid_level": "raid0", 00:10:37.024 "superblock": false, 00:10:37.024 "num_base_bdevs": 4, 00:10:37.024 "num_base_bdevs_discovered": 3, 00:10:37.024 "num_base_bdevs_operational": 3, 00:10:37.024 "base_bdevs_list": [ 00:10:37.024 { 00:10:37.024 "name": null, 00:10:37.024 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.024 "is_configured": false, 00:10:37.024 "data_offset": 0, 00:10:37.024 "data_size": 65536 00:10:37.024 }, 00:10:37.024 { 00:10:37.024 "name": "BaseBdev2", 00:10:37.024 "uuid": "b4fdd5b5-58c9-41b8-92cb-5ead1a8decdf", 00:10:37.024 "is_configured": 
true, 00:10:37.024 "data_offset": 0, 00:10:37.024 "data_size": 65536 00:10:37.024 }, 00:10:37.025 { 00:10:37.025 "name": "BaseBdev3", 00:10:37.025 "uuid": "d4d970b9-1ab1-4db1-bf81-138f0d75eea7", 00:10:37.025 "is_configured": true, 00:10:37.025 "data_offset": 0, 00:10:37.025 "data_size": 65536 00:10:37.025 }, 00:10:37.025 { 00:10:37.025 "name": "BaseBdev4", 00:10:37.025 "uuid": "3099fd3a-2329-4aec-92a2-daa2f39a4d06", 00:10:37.025 "is_configured": true, 00:10:37.025 "data_offset": 0, 00:10:37.025 "data_size": 65536 00:10:37.025 } 00:10:37.025 ] 00:10:37.025 }' 00:10:37.025 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.025 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.593 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:37.593 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.593 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.593 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.593 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:37.593 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.593 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.593 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:37.593 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:37.593 05:48:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:37.593 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:37.593 05:48:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.593 [2024-12-12 05:48:44.940673] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:37.593 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.593 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:37.593 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.593 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:37.593 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.593 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.593 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.593 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.593 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:37.593 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:37.593 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:37.593 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.593 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.593 [2024-12-12 05:48:45.089555] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:37.851 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.851 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:37.851 05:48:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.851 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.851 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:37.851 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.851 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.851 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.851 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:37.851 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:37.851 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:37.851 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.851 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.851 [2024-12-12 05:48:45.239396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:37.851 [2024-12-12 05:48:45.239448] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:37.851 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.851 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:37.851 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.851 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.851 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:10:37.851 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.851 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.851 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.110 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:38.110 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:38.110 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:38.110 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:38.110 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:38.110 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:38.110 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.110 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.110 BaseBdev2 00:10:38.110 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.110 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:38.110 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.111 [ 00:10:38.111 { 00:10:38.111 "name": "BaseBdev2", 00:10:38.111 "aliases": [ 00:10:38.111 "e85d166e-48e4-4560-84d6-1ae14bd7fb8c" 00:10:38.111 ], 00:10:38.111 "product_name": "Malloc disk", 00:10:38.111 "block_size": 512, 00:10:38.111 "num_blocks": 65536, 00:10:38.111 "uuid": "e85d166e-48e4-4560-84d6-1ae14bd7fb8c", 00:10:38.111 "assigned_rate_limits": { 00:10:38.111 "rw_ios_per_sec": 0, 00:10:38.111 "rw_mbytes_per_sec": 0, 00:10:38.111 "r_mbytes_per_sec": 0, 00:10:38.111 "w_mbytes_per_sec": 0 00:10:38.111 }, 00:10:38.111 "claimed": false, 00:10:38.111 "zoned": false, 00:10:38.111 "supported_io_types": { 00:10:38.111 "read": true, 00:10:38.111 "write": true, 00:10:38.111 "unmap": true, 00:10:38.111 "flush": true, 00:10:38.111 "reset": true, 00:10:38.111 "nvme_admin": false, 00:10:38.111 "nvme_io": false, 00:10:38.111 "nvme_io_md": false, 00:10:38.111 "write_zeroes": true, 00:10:38.111 "zcopy": true, 00:10:38.111 "get_zone_info": false, 00:10:38.111 "zone_management": false, 00:10:38.111 "zone_append": false, 00:10:38.111 "compare": false, 00:10:38.111 "compare_and_write": false, 00:10:38.111 "abort": true, 00:10:38.111 "seek_hole": false, 00:10:38.111 
"seek_data": false, 00:10:38.111 "copy": true, 00:10:38.111 "nvme_iov_md": false 00:10:38.111 }, 00:10:38.111 "memory_domains": [ 00:10:38.111 { 00:10:38.111 "dma_device_id": "system", 00:10:38.111 "dma_device_type": 1 00:10:38.111 }, 00:10:38.111 { 00:10:38.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.111 "dma_device_type": 2 00:10:38.111 } 00:10:38.111 ], 00:10:38.111 "driver_specific": {} 00:10:38.111 } 00:10:38.111 ] 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.111 BaseBdev3 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.111 [ 00:10:38.111 { 00:10:38.111 "name": "BaseBdev3", 00:10:38.111 "aliases": [ 00:10:38.111 "f2441a72-d99e-40a3-9534-9e983445448b" 00:10:38.111 ], 00:10:38.111 "product_name": "Malloc disk", 00:10:38.111 "block_size": 512, 00:10:38.111 "num_blocks": 65536, 00:10:38.111 "uuid": "f2441a72-d99e-40a3-9534-9e983445448b", 00:10:38.111 "assigned_rate_limits": { 00:10:38.111 "rw_ios_per_sec": 0, 00:10:38.111 "rw_mbytes_per_sec": 0, 00:10:38.111 "r_mbytes_per_sec": 0, 00:10:38.111 "w_mbytes_per_sec": 0 00:10:38.111 }, 00:10:38.111 "claimed": false, 00:10:38.111 "zoned": false, 00:10:38.111 "supported_io_types": { 00:10:38.111 "read": true, 00:10:38.111 "write": true, 00:10:38.111 "unmap": true, 00:10:38.111 "flush": true, 00:10:38.111 "reset": true, 00:10:38.111 "nvme_admin": false, 00:10:38.111 "nvme_io": false, 00:10:38.111 "nvme_io_md": false, 00:10:38.111 "write_zeroes": true, 00:10:38.111 "zcopy": true, 00:10:38.111 "get_zone_info": false, 00:10:38.111 "zone_management": false, 00:10:38.111 "zone_append": false, 00:10:38.111 "compare": false, 00:10:38.111 "compare_and_write": false, 00:10:38.111 "abort": true, 00:10:38.111 "seek_hole": false, 00:10:38.111 "seek_data": false, 
00:10:38.111 "copy": true, 00:10:38.111 "nvme_iov_md": false 00:10:38.111 }, 00:10:38.111 "memory_domains": [ 00:10:38.111 { 00:10:38.111 "dma_device_id": "system", 00:10:38.111 "dma_device_type": 1 00:10:38.111 }, 00:10:38.111 { 00:10:38.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.111 "dma_device_type": 2 00:10:38.111 } 00:10:38.111 ], 00:10:38.111 "driver_specific": {} 00:10:38.111 } 00:10:38.111 ] 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.111 BaseBdev4 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.111 
05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.111 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.111 [ 00:10:38.111 { 00:10:38.111 "name": "BaseBdev4", 00:10:38.111 "aliases": [ 00:10:38.111 "1103fceb-d755-45fa-bcbb-03325f8b2146" 00:10:38.111 ], 00:10:38.111 "product_name": "Malloc disk", 00:10:38.111 "block_size": 512, 00:10:38.111 "num_blocks": 65536, 00:10:38.111 "uuid": "1103fceb-d755-45fa-bcbb-03325f8b2146", 00:10:38.111 "assigned_rate_limits": { 00:10:38.111 "rw_ios_per_sec": 0, 00:10:38.111 "rw_mbytes_per_sec": 0, 00:10:38.111 "r_mbytes_per_sec": 0, 00:10:38.111 "w_mbytes_per_sec": 0 00:10:38.111 }, 00:10:38.111 "claimed": false, 00:10:38.111 "zoned": false, 00:10:38.111 "supported_io_types": { 00:10:38.111 "read": true, 00:10:38.111 "write": true, 00:10:38.111 "unmap": true, 00:10:38.111 "flush": true, 00:10:38.111 "reset": true, 00:10:38.111 "nvme_admin": false, 00:10:38.111 "nvme_io": false, 00:10:38.111 "nvme_io_md": false, 00:10:38.111 "write_zeroes": true, 00:10:38.111 "zcopy": true, 00:10:38.111 "get_zone_info": false, 00:10:38.111 "zone_management": false, 00:10:38.111 "zone_append": false, 00:10:38.111 "compare": false, 00:10:38.112 "compare_and_write": false, 00:10:38.112 "abort": true, 00:10:38.112 "seek_hole": false, 00:10:38.112 "seek_data": false, 00:10:38.112 
"copy": true, 00:10:38.112 "nvme_iov_md": false 00:10:38.112 }, 00:10:38.112 "memory_domains": [ 00:10:38.112 { 00:10:38.112 "dma_device_id": "system", 00:10:38.112 "dma_device_type": 1 00:10:38.112 }, 00:10:38.112 { 00:10:38.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.112 "dma_device_type": 2 00:10:38.112 } 00:10:38.112 ], 00:10:38.112 "driver_specific": {} 00:10:38.112 } 00:10:38.112 ] 00:10:38.112 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.112 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:38.112 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:38.112 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:38.112 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:38.112 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.112 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.371 [2024-12-12 05:48:45.636457] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.371 [2024-12-12 05:48:45.636558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.371 [2024-12-12 05:48:45.636621] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.371 [2024-12-12 05:48:45.638447] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:38.371 [2024-12-12 05:48:45.638561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:38.371 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.371 05:48:45 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:38.371 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.371 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.371 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.371 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.371 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.371 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.371 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.371 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.371 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.371 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.371 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.371 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.371 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.371 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.371 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.371 "name": "Existed_Raid", 00:10:38.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.371 "strip_size_kb": 64, 00:10:38.371 "state": "configuring", 00:10:38.371 
"raid_level": "raid0", 00:10:38.371 "superblock": false, 00:10:38.371 "num_base_bdevs": 4, 00:10:38.371 "num_base_bdevs_discovered": 3, 00:10:38.371 "num_base_bdevs_operational": 4, 00:10:38.371 "base_bdevs_list": [ 00:10:38.371 { 00:10:38.371 "name": "BaseBdev1", 00:10:38.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.371 "is_configured": false, 00:10:38.371 "data_offset": 0, 00:10:38.371 "data_size": 0 00:10:38.371 }, 00:10:38.371 { 00:10:38.371 "name": "BaseBdev2", 00:10:38.371 "uuid": "e85d166e-48e4-4560-84d6-1ae14bd7fb8c", 00:10:38.371 "is_configured": true, 00:10:38.371 "data_offset": 0, 00:10:38.371 "data_size": 65536 00:10:38.371 }, 00:10:38.371 { 00:10:38.371 "name": "BaseBdev3", 00:10:38.371 "uuid": "f2441a72-d99e-40a3-9534-9e983445448b", 00:10:38.371 "is_configured": true, 00:10:38.371 "data_offset": 0, 00:10:38.371 "data_size": 65536 00:10:38.371 }, 00:10:38.371 { 00:10:38.371 "name": "BaseBdev4", 00:10:38.371 "uuid": "1103fceb-d755-45fa-bcbb-03325f8b2146", 00:10:38.371 "is_configured": true, 00:10:38.371 "data_offset": 0, 00:10:38.371 "data_size": 65536 00:10:38.371 } 00:10:38.371 ] 00:10:38.371 }' 00:10:38.371 05:48:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.371 05:48:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.629 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:38.629 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.629 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.629 [2024-12-12 05:48:46.107659] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:38.629 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.629 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:38.629 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.629 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.629 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:38.629 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.629 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:38.629 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.629 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.629 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.630 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.630 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.630 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.630 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.630 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.630 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.888 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.888 "name": "Existed_Raid", 00:10:38.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.888 "strip_size_kb": 64, 00:10:38.888 "state": "configuring", 00:10:38.888 "raid_level": "raid0", 00:10:38.888 "superblock": false, 00:10:38.888 
"num_base_bdevs": 4, 00:10:38.888 "num_base_bdevs_discovered": 2, 00:10:38.888 "num_base_bdevs_operational": 4, 00:10:38.888 "base_bdevs_list": [ 00:10:38.888 { 00:10:38.888 "name": "BaseBdev1", 00:10:38.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.888 "is_configured": false, 00:10:38.888 "data_offset": 0, 00:10:38.888 "data_size": 0 00:10:38.888 }, 00:10:38.888 { 00:10:38.888 "name": null, 00:10:38.888 "uuid": "e85d166e-48e4-4560-84d6-1ae14bd7fb8c", 00:10:38.888 "is_configured": false, 00:10:38.888 "data_offset": 0, 00:10:38.888 "data_size": 65536 00:10:38.888 }, 00:10:38.888 { 00:10:38.888 "name": "BaseBdev3", 00:10:38.888 "uuid": "f2441a72-d99e-40a3-9534-9e983445448b", 00:10:38.888 "is_configured": true, 00:10:38.888 "data_offset": 0, 00:10:38.888 "data_size": 65536 00:10:38.888 }, 00:10:38.888 { 00:10:38.889 "name": "BaseBdev4", 00:10:38.889 "uuid": "1103fceb-d755-45fa-bcbb-03325f8b2146", 00:10:38.889 "is_configured": true, 00:10:38.889 "data_offset": 0, 00:10:38.889 "data_size": 65536 00:10:38.889 } 00:10:38.889 ] 00:10:38.889 }' 00:10:38.889 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.889 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.147 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.147 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.147 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:39.147 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.147 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.147 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:39.147 05:48:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:39.147 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.147 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.147 [2024-12-12 05:48:46.636882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.147 BaseBdev1 00:10:39.147 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.147 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:39.148 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:39.148 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.148 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:39.148 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.148 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.148 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.148 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.148 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.148 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.148 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:39.148 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.148 05:48:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:39.148 [ 00:10:39.148 { 00:10:39.148 "name": "BaseBdev1", 00:10:39.148 "aliases": [ 00:10:39.148 "b2c8607c-78e8-4ce1-a142-c17dad18fcdd" 00:10:39.148 ], 00:10:39.148 "product_name": "Malloc disk", 00:10:39.148 "block_size": 512, 00:10:39.148 "num_blocks": 65536, 00:10:39.148 "uuid": "b2c8607c-78e8-4ce1-a142-c17dad18fcdd", 00:10:39.148 "assigned_rate_limits": { 00:10:39.148 "rw_ios_per_sec": 0, 00:10:39.148 "rw_mbytes_per_sec": 0, 00:10:39.148 "r_mbytes_per_sec": 0, 00:10:39.148 "w_mbytes_per_sec": 0 00:10:39.148 }, 00:10:39.148 "claimed": true, 00:10:39.148 "claim_type": "exclusive_write", 00:10:39.148 "zoned": false, 00:10:39.148 "supported_io_types": { 00:10:39.148 "read": true, 00:10:39.148 "write": true, 00:10:39.148 "unmap": true, 00:10:39.148 "flush": true, 00:10:39.148 "reset": true, 00:10:39.148 "nvme_admin": false, 00:10:39.148 "nvme_io": false, 00:10:39.148 "nvme_io_md": false, 00:10:39.148 "write_zeroes": true, 00:10:39.148 "zcopy": true, 00:10:39.148 "get_zone_info": false, 00:10:39.148 "zone_management": false, 00:10:39.148 "zone_append": false, 00:10:39.148 "compare": false, 00:10:39.407 "compare_and_write": false, 00:10:39.407 "abort": true, 00:10:39.407 "seek_hole": false, 00:10:39.407 "seek_data": false, 00:10:39.407 "copy": true, 00:10:39.407 "nvme_iov_md": false 00:10:39.407 }, 00:10:39.407 "memory_domains": [ 00:10:39.407 { 00:10:39.407 "dma_device_id": "system", 00:10:39.407 "dma_device_type": 1 00:10:39.407 }, 00:10:39.407 { 00:10:39.407 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.407 "dma_device_type": 2 00:10:39.407 } 00:10:39.407 ], 00:10:39.407 "driver_specific": {} 00:10:39.407 } 00:10:39.407 ] 00:10:39.407 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.407 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:39.407 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.407 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.407 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.407 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.407 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.407 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.407 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.407 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.407 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.407 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.407 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.407 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.407 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.407 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.407 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.407 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.407 "name": "Existed_Raid", 00:10:39.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.407 "strip_size_kb": 64, 00:10:39.407 "state": "configuring", 00:10:39.407 "raid_level": "raid0", 00:10:39.407 "superblock": false, 
00:10:39.407 "num_base_bdevs": 4, 00:10:39.407 "num_base_bdevs_discovered": 3, 00:10:39.407 "num_base_bdevs_operational": 4, 00:10:39.407 "base_bdevs_list": [ 00:10:39.407 { 00:10:39.407 "name": "BaseBdev1", 00:10:39.407 "uuid": "b2c8607c-78e8-4ce1-a142-c17dad18fcdd", 00:10:39.407 "is_configured": true, 00:10:39.407 "data_offset": 0, 00:10:39.407 "data_size": 65536 00:10:39.407 }, 00:10:39.407 { 00:10:39.407 "name": null, 00:10:39.407 "uuid": "e85d166e-48e4-4560-84d6-1ae14bd7fb8c", 00:10:39.407 "is_configured": false, 00:10:39.407 "data_offset": 0, 00:10:39.407 "data_size": 65536 00:10:39.407 }, 00:10:39.407 { 00:10:39.407 "name": "BaseBdev3", 00:10:39.407 "uuid": "f2441a72-d99e-40a3-9534-9e983445448b", 00:10:39.407 "is_configured": true, 00:10:39.407 "data_offset": 0, 00:10:39.407 "data_size": 65536 00:10:39.407 }, 00:10:39.407 { 00:10:39.407 "name": "BaseBdev4", 00:10:39.407 "uuid": "1103fceb-d755-45fa-bcbb-03325f8b2146", 00:10:39.407 "is_configured": true, 00:10:39.407 "data_offset": 0, 00:10:39.407 "data_size": 65536 00:10:39.407 } 00:10:39.407 ] 00:10:39.407 }' 00:10:39.407 05:48:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.407 05:48:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:39.666 05:48:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.666 [2024-12-12 05:48:47.144082] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.666 05:48:47 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.666 05:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.925 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.925 "name": "Existed_Raid", 00:10:39.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.925 "strip_size_kb": 64, 00:10:39.925 "state": "configuring", 00:10:39.925 "raid_level": "raid0", 00:10:39.925 "superblock": false, 00:10:39.925 "num_base_bdevs": 4, 00:10:39.925 "num_base_bdevs_discovered": 2, 00:10:39.925 "num_base_bdevs_operational": 4, 00:10:39.925 "base_bdevs_list": [ 00:10:39.925 { 00:10:39.925 "name": "BaseBdev1", 00:10:39.925 "uuid": "b2c8607c-78e8-4ce1-a142-c17dad18fcdd", 00:10:39.925 "is_configured": true, 00:10:39.925 "data_offset": 0, 00:10:39.925 "data_size": 65536 00:10:39.925 }, 00:10:39.925 { 00:10:39.925 "name": null, 00:10:39.925 "uuid": "e85d166e-48e4-4560-84d6-1ae14bd7fb8c", 00:10:39.925 "is_configured": false, 00:10:39.925 "data_offset": 0, 00:10:39.925 "data_size": 65536 00:10:39.925 }, 00:10:39.925 { 00:10:39.925 "name": null, 00:10:39.925 "uuid": "f2441a72-d99e-40a3-9534-9e983445448b", 00:10:39.925 "is_configured": false, 00:10:39.925 "data_offset": 0, 00:10:39.925 "data_size": 65536 00:10:39.925 }, 00:10:39.925 { 00:10:39.925 "name": "BaseBdev4", 00:10:39.925 "uuid": "1103fceb-d755-45fa-bcbb-03325f8b2146", 00:10:39.925 "is_configured": true, 00:10:39.925 "data_offset": 0, 00:10:39.925 "data_size": 65536 00:10:39.925 } 00:10:39.925 ] 00:10:39.925 }' 00:10:39.925 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.925 05:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.183 [2024-12-12 05:48:47.599326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.183 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.183 "name": "Existed_Raid", 00:10:40.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.184 "strip_size_kb": 64, 00:10:40.184 "state": "configuring", 00:10:40.184 "raid_level": "raid0", 00:10:40.184 "superblock": false, 00:10:40.184 "num_base_bdevs": 4, 00:10:40.184 "num_base_bdevs_discovered": 3, 00:10:40.184 "num_base_bdevs_operational": 4, 00:10:40.184 "base_bdevs_list": [ 00:10:40.184 { 00:10:40.184 "name": "BaseBdev1", 00:10:40.184 "uuid": "b2c8607c-78e8-4ce1-a142-c17dad18fcdd", 00:10:40.184 "is_configured": true, 00:10:40.184 "data_offset": 0, 00:10:40.184 "data_size": 65536 00:10:40.184 }, 00:10:40.184 { 00:10:40.184 "name": null, 00:10:40.184 "uuid": "e85d166e-48e4-4560-84d6-1ae14bd7fb8c", 00:10:40.184 "is_configured": false, 00:10:40.184 "data_offset": 0, 00:10:40.184 "data_size": 65536 00:10:40.184 }, 00:10:40.184 { 00:10:40.184 "name": "BaseBdev3", 00:10:40.184 "uuid": "f2441a72-d99e-40a3-9534-9e983445448b", 
00:10:40.184 "is_configured": true, 00:10:40.184 "data_offset": 0, 00:10:40.184 "data_size": 65536 00:10:40.184 }, 00:10:40.184 { 00:10:40.184 "name": "BaseBdev4", 00:10:40.184 "uuid": "1103fceb-d755-45fa-bcbb-03325f8b2146", 00:10:40.184 "is_configured": true, 00:10:40.184 "data_offset": 0, 00:10:40.184 "data_size": 65536 00:10:40.184 } 00:10:40.184 ] 00:10:40.184 }' 00:10:40.184 05:48:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.184 05:48:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.752 [2024-12-12 05:48:48.090561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:40.752 05:48:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.752 "name": "Existed_Raid", 00:10:40.752 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.752 "strip_size_kb": 64, 00:10:40.752 "state": "configuring", 00:10:40.752 "raid_level": "raid0", 00:10:40.752 "superblock": false, 00:10:40.752 "num_base_bdevs": 4, 00:10:40.752 "num_base_bdevs_discovered": 2, 00:10:40.752 
"num_base_bdevs_operational": 4, 00:10:40.752 "base_bdevs_list": [ 00:10:40.752 { 00:10:40.752 "name": null, 00:10:40.752 "uuid": "b2c8607c-78e8-4ce1-a142-c17dad18fcdd", 00:10:40.752 "is_configured": false, 00:10:40.752 "data_offset": 0, 00:10:40.752 "data_size": 65536 00:10:40.752 }, 00:10:40.752 { 00:10:40.752 "name": null, 00:10:40.752 "uuid": "e85d166e-48e4-4560-84d6-1ae14bd7fb8c", 00:10:40.752 "is_configured": false, 00:10:40.752 "data_offset": 0, 00:10:40.752 "data_size": 65536 00:10:40.752 }, 00:10:40.752 { 00:10:40.752 "name": "BaseBdev3", 00:10:40.752 "uuid": "f2441a72-d99e-40a3-9534-9e983445448b", 00:10:40.752 "is_configured": true, 00:10:40.752 "data_offset": 0, 00:10:40.752 "data_size": 65536 00:10:40.752 }, 00:10:40.752 { 00:10:40.752 "name": "BaseBdev4", 00:10:40.752 "uuid": "1103fceb-d755-45fa-bcbb-03325f8b2146", 00:10:40.752 "is_configured": true, 00:10:40.752 "data_offset": 0, 00:10:40.752 "data_size": 65536 00:10:40.752 } 00:10:40.752 ] 00:10:40.752 }' 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.752 05:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.320 [2024-12-12 05:48:48.638742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.320 05:48:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.320 "name": "Existed_Raid", 00:10:41.320 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.320 "strip_size_kb": 64, 00:10:41.320 "state": "configuring", 00:10:41.320 "raid_level": "raid0", 00:10:41.320 "superblock": false, 00:10:41.320 "num_base_bdevs": 4, 00:10:41.320 "num_base_bdevs_discovered": 3, 00:10:41.320 "num_base_bdevs_operational": 4, 00:10:41.320 "base_bdevs_list": [ 00:10:41.320 { 00:10:41.320 "name": null, 00:10:41.320 "uuid": "b2c8607c-78e8-4ce1-a142-c17dad18fcdd", 00:10:41.320 "is_configured": false, 00:10:41.320 "data_offset": 0, 00:10:41.320 "data_size": 65536 00:10:41.320 }, 00:10:41.320 { 00:10:41.320 "name": "BaseBdev2", 00:10:41.320 "uuid": "e85d166e-48e4-4560-84d6-1ae14bd7fb8c", 00:10:41.320 "is_configured": true, 00:10:41.320 "data_offset": 0, 00:10:41.320 "data_size": 65536 00:10:41.320 }, 00:10:41.320 { 00:10:41.320 "name": "BaseBdev3", 00:10:41.320 "uuid": "f2441a72-d99e-40a3-9534-9e983445448b", 00:10:41.320 "is_configured": true, 00:10:41.320 "data_offset": 0, 00:10:41.320 "data_size": 65536 00:10:41.320 }, 00:10:41.320 { 00:10:41.320 "name": "BaseBdev4", 00:10:41.320 "uuid": "1103fceb-d755-45fa-bcbb-03325f8b2146", 00:10:41.320 "is_configured": true, 00:10:41.320 "data_offset": 0, 00:10:41.320 "data_size": 65536 00:10:41.320 } 00:10:41.320 ] 00:10:41.320 }' 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.320 05:48:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.888 05:48:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b2c8607c-78e8-4ce1-a142-c17dad18fcdd 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.888 [2024-12-12 05:48:49.234152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:41.888 [2024-12-12 05:48:49.234201] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:41.888 [2024-12-12 05:48:49.234208] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:10:41.888 [2024-12-12 05:48:49.234478] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:10:41.888 [2024-12-12 05:48:49.234662] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:41.888 [2024-12-12 05:48:49.234675] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:41.888 [2024-12-12 05:48:49.234911] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.888 NewBaseBdev 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:41.888 [ 00:10:41.888 { 00:10:41.888 "name": "NewBaseBdev", 00:10:41.888 "aliases": [ 00:10:41.888 "b2c8607c-78e8-4ce1-a142-c17dad18fcdd" 00:10:41.888 ], 00:10:41.888 "product_name": "Malloc disk", 00:10:41.888 "block_size": 512, 00:10:41.888 "num_blocks": 65536, 00:10:41.888 "uuid": "b2c8607c-78e8-4ce1-a142-c17dad18fcdd", 00:10:41.888 "assigned_rate_limits": { 00:10:41.888 "rw_ios_per_sec": 0, 00:10:41.888 "rw_mbytes_per_sec": 0, 00:10:41.888 "r_mbytes_per_sec": 0, 00:10:41.888 "w_mbytes_per_sec": 0 00:10:41.888 }, 00:10:41.888 "claimed": true, 00:10:41.888 "claim_type": "exclusive_write", 00:10:41.888 "zoned": false, 00:10:41.888 "supported_io_types": { 00:10:41.888 "read": true, 00:10:41.888 "write": true, 00:10:41.888 "unmap": true, 00:10:41.888 "flush": true, 00:10:41.888 "reset": true, 00:10:41.888 "nvme_admin": false, 00:10:41.888 "nvme_io": false, 00:10:41.888 "nvme_io_md": false, 00:10:41.888 "write_zeroes": true, 00:10:41.888 "zcopy": true, 00:10:41.888 "get_zone_info": false, 00:10:41.888 "zone_management": false, 00:10:41.888 "zone_append": false, 00:10:41.888 "compare": false, 00:10:41.888 "compare_and_write": false, 00:10:41.888 "abort": true, 00:10:41.888 "seek_hole": false, 00:10:41.888 "seek_data": false, 00:10:41.888 "copy": true, 00:10:41.888 "nvme_iov_md": false 00:10:41.888 }, 00:10:41.888 "memory_domains": [ 00:10:41.888 { 00:10:41.888 "dma_device_id": "system", 00:10:41.888 "dma_device_type": 1 00:10:41.888 }, 00:10:41.888 { 00:10:41.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.888 "dma_device_type": 2 00:10:41.888 } 00:10:41.888 ], 00:10:41.888 "driver_specific": {} 00:10:41.888 } 00:10:41.888 ] 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.888 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.888 "name": "Existed_Raid", 00:10:41.889 "uuid": "bb89f73b-a7e2-42eb-86d5-2172e72e5d53", 00:10:41.889 "strip_size_kb": 64, 00:10:41.889 "state": "online", 00:10:41.889 "raid_level": "raid0", 00:10:41.889 "superblock": false, 00:10:41.889 "num_base_bdevs": 4, 00:10:41.889 
"num_base_bdevs_discovered": 4, 00:10:41.889 "num_base_bdevs_operational": 4, 00:10:41.889 "base_bdevs_list": [ 00:10:41.889 { 00:10:41.889 "name": "NewBaseBdev", 00:10:41.889 "uuid": "b2c8607c-78e8-4ce1-a142-c17dad18fcdd", 00:10:41.889 "is_configured": true, 00:10:41.889 "data_offset": 0, 00:10:41.889 "data_size": 65536 00:10:41.889 }, 00:10:41.889 { 00:10:41.889 "name": "BaseBdev2", 00:10:41.889 "uuid": "e85d166e-48e4-4560-84d6-1ae14bd7fb8c", 00:10:41.889 "is_configured": true, 00:10:41.889 "data_offset": 0, 00:10:41.889 "data_size": 65536 00:10:41.889 }, 00:10:41.889 { 00:10:41.889 "name": "BaseBdev3", 00:10:41.889 "uuid": "f2441a72-d99e-40a3-9534-9e983445448b", 00:10:41.889 "is_configured": true, 00:10:41.889 "data_offset": 0, 00:10:41.889 "data_size": 65536 00:10:41.889 }, 00:10:41.889 { 00:10:41.889 "name": "BaseBdev4", 00:10:41.889 "uuid": "1103fceb-d755-45fa-bcbb-03325f8b2146", 00:10:41.889 "is_configured": true, 00:10:41.889 "data_offset": 0, 00:10:41.889 "data_size": 65536 00:10:41.889 } 00:10:41.889 ] 00:10:41.889 }' 00:10:41.889 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.889 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.456 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:42.456 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:42.456 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:42.456 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:42.456 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:42.456 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:42.456 05:48:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:42.456 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:42.456 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.456 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.456 [2024-12-12 05:48:49.689831] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:42.456 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.456 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:42.456 "name": "Existed_Raid", 00:10:42.456 "aliases": [ 00:10:42.456 "bb89f73b-a7e2-42eb-86d5-2172e72e5d53" 00:10:42.456 ], 00:10:42.456 "product_name": "Raid Volume", 00:10:42.456 "block_size": 512, 00:10:42.456 "num_blocks": 262144, 00:10:42.456 "uuid": "bb89f73b-a7e2-42eb-86d5-2172e72e5d53", 00:10:42.456 "assigned_rate_limits": { 00:10:42.456 "rw_ios_per_sec": 0, 00:10:42.456 "rw_mbytes_per_sec": 0, 00:10:42.456 "r_mbytes_per_sec": 0, 00:10:42.456 "w_mbytes_per_sec": 0 00:10:42.456 }, 00:10:42.456 "claimed": false, 00:10:42.456 "zoned": false, 00:10:42.456 "supported_io_types": { 00:10:42.456 "read": true, 00:10:42.456 "write": true, 00:10:42.456 "unmap": true, 00:10:42.456 "flush": true, 00:10:42.456 "reset": true, 00:10:42.456 "nvme_admin": false, 00:10:42.456 "nvme_io": false, 00:10:42.456 "nvme_io_md": false, 00:10:42.456 "write_zeroes": true, 00:10:42.457 "zcopy": false, 00:10:42.457 "get_zone_info": false, 00:10:42.457 "zone_management": false, 00:10:42.457 "zone_append": false, 00:10:42.457 "compare": false, 00:10:42.457 "compare_and_write": false, 00:10:42.457 "abort": false, 00:10:42.457 "seek_hole": false, 00:10:42.457 "seek_data": false, 00:10:42.457 "copy": false, 00:10:42.457 "nvme_iov_md": false 00:10:42.457 }, 00:10:42.457 "memory_domains": [ 
00:10:42.457 { 00:10:42.457 "dma_device_id": "system", 00:10:42.457 "dma_device_type": 1 00:10:42.457 }, 00:10:42.457 { 00:10:42.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.457 "dma_device_type": 2 00:10:42.457 }, 00:10:42.457 { 00:10:42.457 "dma_device_id": "system", 00:10:42.457 "dma_device_type": 1 00:10:42.457 }, 00:10:42.457 { 00:10:42.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.457 "dma_device_type": 2 00:10:42.457 }, 00:10:42.457 { 00:10:42.457 "dma_device_id": "system", 00:10:42.457 "dma_device_type": 1 00:10:42.457 }, 00:10:42.457 { 00:10:42.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.457 "dma_device_type": 2 00:10:42.457 }, 00:10:42.457 { 00:10:42.457 "dma_device_id": "system", 00:10:42.457 "dma_device_type": 1 00:10:42.457 }, 00:10:42.457 { 00:10:42.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.457 "dma_device_type": 2 00:10:42.457 } 00:10:42.457 ], 00:10:42.457 "driver_specific": { 00:10:42.457 "raid": { 00:10:42.457 "uuid": "bb89f73b-a7e2-42eb-86d5-2172e72e5d53", 00:10:42.457 "strip_size_kb": 64, 00:10:42.457 "state": "online", 00:10:42.457 "raid_level": "raid0", 00:10:42.457 "superblock": false, 00:10:42.457 "num_base_bdevs": 4, 00:10:42.457 "num_base_bdevs_discovered": 4, 00:10:42.457 "num_base_bdevs_operational": 4, 00:10:42.457 "base_bdevs_list": [ 00:10:42.457 { 00:10:42.457 "name": "NewBaseBdev", 00:10:42.457 "uuid": "b2c8607c-78e8-4ce1-a142-c17dad18fcdd", 00:10:42.457 "is_configured": true, 00:10:42.457 "data_offset": 0, 00:10:42.457 "data_size": 65536 00:10:42.457 }, 00:10:42.457 { 00:10:42.457 "name": "BaseBdev2", 00:10:42.457 "uuid": "e85d166e-48e4-4560-84d6-1ae14bd7fb8c", 00:10:42.457 "is_configured": true, 00:10:42.457 "data_offset": 0, 00:10:42.457 "data_size": 65536 00:10:42.457 }, 00:10:42.457 { 00:10:42.457 "name": "BaseBdev3", 00:10:42.457 "uuid": "f2441a72-d99e-40a3-9534-9e983445448b", 00:10:42.457 "is_configured": true, 00:10:42.457 "data_offset": 0, 00:10:42.457 "data_size": 65536 
00:10:42.457 }, 00:10:42.457 { 00:10:42.457 "name": "BaseBdev4", 00:10:42.457 "uuid": "1103fceb-d755-45fa-bcbb-03325f8b2146", 00:10:42.457 "is_configured": true, 00:10:42.457 "data_offset": 0, 00:10:42.457 "data_size": 65536 00:10:42.457 } 00:10:42.457 ] 00:10:42.457 } 00:10:42.457 } 00:10:42.457 }' 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:42.457 BaseBdev2 00:10:42.457 BaseBdev3 00:10:42.457 BaseBdev4' 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.457 
05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.457 05:48:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:42.716 05:48:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.716 05:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:42.716 05:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:42.716 05:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:42.716 05:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.716 05:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.716 [2024-12-12 05:48:50.016839] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:42.716 [2024-12-12 05:48:50.016869] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.716 [2024-12-12 05:48:50.016945] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.716 [2024-12-12 05:48:50.017011] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:42.716 [2024-12-12 05:48:50.017020] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:42.716 05:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.716 05:48:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 70298 00:10:42.716 05:48:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 70298 ']' 00:10:42.716 05:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 70298 00:10:42.716 05:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:42.716 05:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:42.716 05:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70298 00:10:42.716 05:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:42.716 05:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:42.716 05:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70298' 00:10:42.716 killing process with pid 70298 00:10:42.716 05:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 70298 00:10:42.716 [2024-12-12 05:48:50.058073] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:42.716 05:48:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 70298 00:10:42.976 [2024-12-12 05:48:50.449087] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:44.353 00:10:44.353 real 0m11.424s 00:10:44.353 user 0m18.165s 00:10:44.353 sys 0m2.007s 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.353 ************************************ 00:10:44.353 END TEST raid_state_function_test 00:10:44.353 ************************************ 00:10:44.353 05:48:51 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:10:44.353 05:48:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:44.353 05:48:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.353 05:48:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:44.353 ************************************ 00:10:44.353 START TEST raid_state_function_test_sb 00:10:44.353 ************************************ 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:44.353 
05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70970 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70970' 00:10:44.353 Process raid pid: 70970 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70970 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70970 ']' 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.353 05:48:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:44.353 [2024-12-12 05:48:51.730679] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:10:44.353 [2024-12-12 05:48:51.731462] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.612 [2024-12-12 05:48:51.906924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.612 [2024-12-12 05:48:52.022218] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.870 [2024-12-12 05:48:52.231510] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.870 [2024-12-12 05:48:52.231625] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.129 [2024-12-12 05:48:52.576146] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:45.129 [2024-12-12 05:48:52.576274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:45.129 [2024-12-12 05:48:52.576291] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.129 [2024-12-12 05:48:52.576302] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.129 [2024-12-12 05:48:52.576309] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:10:45.129 [2024-12-12 05:48:52.576317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:45.129 [2024-12-12 05:48:52.576324] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:45.129 [2024-12-12 05:48:52.576333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.129 05:48:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.129 "name": "Existed_Raid", 00:10:45.129 "uuid": "c86af0d9-68f7-4815-945d-334f39800d2a", 00:10:45.129 "strip_size_kb": 64, 00:10:45.129 "state": "configuring", 00:10:45.129 "raid_level": "raid0", 00:10:45.129 "superblock": true, 00:10:45.129 "num_base_bdevs": 4, 00:10:45.129 "num_base_bdevs_discovered": 0, 00:10:45.129 "num_base_bdevs_operational": 4, 00:10:45.129 "base_bdevs_list": [ 00:10:45.129 { 00:10:45.129 "name": "BaseBdev1", 00:10:45.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.129 "is_configured": false, 00:10:45.129 "data_offset": 0, 00:10:45.129 "data_size": 0 00:10:45.129 }, 00:10:45.129 { 00:10:45.129 "name": "BaseBdev2", 00:10:45.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.129 "is_configured": false, 00:10:45.129 "data_offset": 0, 00:10:45.129 "data_size": 0 00:10:45.129 }, 00:10:45.129 { 00:10:45.129 "name": "BaseBdev3", 00:10:45.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.129 "is_configured": false, 00:10:45.129 "data_offset": 0, 00:10:45.129 "data_size": 0 00:10:45.129 }, 00:10:45.129 { 00:10:45.129 "name": "BaseBdev4", 00:10:45.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.129 "is_configured": false, 00:10:45.129 "data_offset": 0, 00:10:45.129 "data_size": 0 00:10:45.129 } 00:10:45.129 ] 00:10:45.129 }' 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.129 05:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.697 05:48:52 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:45.697 05:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.697 05:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.697 [2024-12-12 05:48:52.955462] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:45.697 [2024-12-12 05:48:52.955593] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:45.697 05:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.698 05:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:45.698 05:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.698 05:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.698 [2024-12-12 05:48:52.967436] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:45.698 [2024-12-12 05:48:52.967561] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:45.698 [2024-12-12 05:48:52.967597] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.698 [2024-12-12 05:48:52.967632] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.698 [2024-12-12 05:48:52.967663] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:45.698 [2024-12-12 05:48:52.967703] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:45.698 [2024-12-12 05:48:52.967733] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev4 00:10:45.698 [2024-12-12 05:48:52.967767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:45.698 05:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.698 05:48:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:45.698 05:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.698 05:48:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.698 [2024-12-12 05:48:53.016897] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:45.698 BaseBdev1 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.698 [ 00:10:45.698 { 00:10:45.698 "name": "BaseBdev1", 00:10:45.698 "aliases": [ 00:10:45.698 "3801c5d5-c0ac-435a-85e0-d62bc5948968" 00:10:45.698 ], 00:10:45.698 "product_name": "Malloc disk", 00:10:45.698 "block_size": 512, 00:10:45.698 "num_blocks": 65536, 00:10:45.698 "uuid": "3801c5d5-c0ac-435a-85e0-d62bc5948968", 00:10:45.698 "assigned_rate_limits": { 00:10:45.698 "rw_ios_per_sec": 0, 00:10:45.698 "rw_mbytes_per_sec": 0, 00:10:45.698 "r_mbytes_per_sec": 0, 00:10:45.698 "w_mbytes_per_sec": 0 00:10:45.698 }, 00:10:45.698 "claimed": true, 00:10:45.698 "claim_type": "exclusive_write", 00:10:45.698 "zoned": false, 00:10:45.698 "supported_io_types": { 00:10:45.698 "read": true, 00:10:45.698 "write": true, 00:10:45.698 "unmap": true, 00:10:45.698 "flush": true, 00:10:45.698 "reset": true, 00:10:45.698 "nvme_admin": false, 00:10:45.698 "nvme_io": false, 00:10:45.698 "nvme_io_md": false, 00:10:45.698 "write_zeroes": true, 00:10:45.698 "zcopy": true, 00:10:45.698 "get_zone_info": false, 00:10:45.698 "zone_management": false, 00:10:45.698 "zone_append": false, 00:10:45.698 "compare": false, 00:10:45.698 "compare_and_write": false, 00:10:45.698 "abort": true, 00:10:45.698 "seek_hole": false, 00:10:45.698 "seek_data": false, 00:10:45.698 "copy": true, 00:10:45.698 "nvme_iov_md": false 00:10:45.698 }, 00:10:45.698 "memory_domains": [ 00:10:45.698 { 00:10:45.698 "dma_device_id": "system", 00:10:45.698 "dma_device_type": 1 00:10:45.698 }, 00:10:45.698 { 00:10:45.698 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.698 "dma_device_type": 2 00:10:45.698 } 00:10:45.698 ], 00:10:45.698 "driver_specific": {} 
00:10:45.698 } 00:10:45.698 ] 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.698 "name": "Existed_Raid", 00:10:45.698 "uuid": "6608f713-8ad8-4b49-8394-51758e85297a", 00:10:45.698 "strip_size_kb": 64, 00:10:45.698 "state": "configuring", 00:10:45.698 "raid_level": "raid0", 00:10:45.698 "superblock": true, 00:10:45.698 "num_base_bdevs": 4, 00:10:45.698 "num_base_bdevs_discovered": 1, 00:10:45.698 "num_base_bdevs_operational": 4, 00:10:45.698 "base_bdevs_list": [ 00:10:45.698 { 00:10:45.698 "name": "BaseBdev1", 00:10:45.698 "uuid": "3801c5d5-c0ac-435a-85e0-d62bc5948968", 00:10:45.698 "is_configured": true, 00:10:45.698 "data_offset": 2048, 00:10:45.698 "data_size": 63488 00:10:45.698 }, 00:10:45.698 { 00:10:45.698 "name": "BaseBdev2", 00:10:45.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.698 "is_configured": false, 00:10:45.698 "data_offset": 0, 00:10:45.698 "data_size": 0 00:10:45.698 }, 00:10:45.698 { 00:10:45.698 "name": "BaseBdev3", 00:10:45.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.698 "is_configured": false, 00:10:45.698 "data_offset": 0, 00:10:45.698 "data_size": 0 00:10:45.698 }, 00:10:45.698 { 00:10:45.698 "name": "BaseBdev4", 00:10:45.698 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.698 "is_configured": false, 00:10:45.698 "data_offset": 0, 00:10:45.698 "data_size": 0 00:10:45.698 } 00:10:45.698 ] 00:10:45.698 }' 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.698 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:46.273 [2024-12-12 05:48:53.504162] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:46.273 [2024-12-12 05:48:53.504273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.273 [2024-12-12 05:48:53.512209] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:46.273 [2024-12-12 05:48:53.514067] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:46.273 [2024-12-12 05:48:53.514153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:46.273 [2024-12-12 05:48:53.514189] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:46.273 [2024-12-12 05:48:53.514226] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:46.273 [2024-12-12 05:48:53.514257] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:10:46.273 [2024-12-12 05:48:53.514290] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:46.273 05:48:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.273 "name": 
"Existed_Raid", 00:10:46.273 "uuid": "9a439d92-8b0e-4e4a-98e2-b978defabc40", 00:10:46.273 "strip_size_kb": 64, 00:10:46.273 "state": "configuring", 00:10:46.273 "raid_level": "raid0", 00:10:46.273 "superblock": true, 00:10:46.273 "num_base_bdevs": 4, 00:10:46.273 "num_base_bdevs_discovered": 1, 00:10:46.273 "num_base_bdevs_operational": 4, 00:10:46.273 "base_bdevs_list": [ 00:10:46.273 { 00:10:46.273 "name": "BaseBdev1", 00:10:46.273 "uuid": "3801c5d5-c0ac-435a-85e0-d62bc5948968", 00:10:46.273 "is_configured": true, 00:10:46.273 "data_offset": 2048, 00:10:46.273 "data_size": 63488 00:10:46.273 }, 00:10:46.273 { 00:10:46.273 "name": "BaseBdev2", 00:10:46.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.273 "is_configured": false, 00:10:46.273 "data_offset": 0, 00:10:46.273 "data_size": 0 00:10:46.273 }, 00:10:46.273 { 00:10:46.273 "name": "BaseBdev3", 00:10:46.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.273 "is_configured": false, 00:10:46.273 "data_offset": 0, 00:10:46.273 "data_size": 0 00:10:46.273 }, 00:10:46.273 { 00:10:46.273 "name": "BaseBdev4", 00:10:46.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.273 "is_configured": false, 00:10:46.273 "data_offset": 0, 00:10:46.273 "data_size": 0 00:10:46.273 } 00:10:46.273 ] 00:10:46.273 }' 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.273 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.544 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:46.544 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.544 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.545 [2024-12-12 05:48:53.961212] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:46.545 BaseBdev2 00:10:46.545 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.545 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:46.545 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:46.545 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.545 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:46.545 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.545 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.545 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.545 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.545 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.545 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.545 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:46.545 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.545 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.545 [ 00:10:46.545 { 00:10:46.545 "name": "BaseBdev2", 00:10:46.545 "aliases": [ 00:10:46.545 "1582a4b2-9849-409f-b4e2-aac86790bdd7" 00:10:46.545 ], 00:10:46.545 "product_name": "Malloc disk", 00:10:46.545 "block_size": 512, 00:10:46.545 "num_blocks": 65536, 00:10:46.545 "uuid": "1582a4b2-9849-409f-b4e2-aac86790bdd7", 00:10:46.545 
"assigned_rate_limits": { 00:10:46.545 "rw_ios_per_sec": 0, 00:10:46.545 "rw_mbytes_per_sec": 0, 00:10:46.545 "r_mbytes_per_sec": 0, 00:10:46.545 "w_mbytes_per_sec": 0 00:10:46.545 }, 00:10:46.545 "claimed": true, 00:10:46.545 "claim_type": "exclusive_write", 00:10:46.545 "zoned": false, 00:10:46.545 "supported_io_types": { 00:10:46.545 "read": true, 00:10:46.545 "write": true, 00:10:46.545 "unmap": true, 00:10:46.545 "flush": true, 00:10:46.545 "reset": true, 00:10:46.545 "nvme_admin": false, 00:10:46.545 "nvme_io": false, 00:10:46.545 "nvme_io_md": false, 00:10:46.545 "write_zeroes": true, 00:10:46.545 "zcopy": true, 00:10:46.545 "get_zone_info": false, 00:10:46.545 "zone_management": false, 00:10:46.545 "zone_append": false, 00:10:46.545 "compare": false, 00:10:46.545 "compare_and_write": false, 00:10:46.545 "abort": true, 00:10:46.545 "seek_hole": false, 00:10:46.545 "seek_data": false, 00:10:46.545 "copy": true, 00:10:46.545 "nvme_iov_md": false 00:10:46.545 }, 00:10:46.545 "memory_domains": [ 00:10:46.545 { 00:10:46.545 "dma_device_id": "system", 00:10:46.545 "dma_device_type": 1 00:10:46.545 }, 00:10:46.545 { 00:10:46.545 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.545 "dma_device_type": 2 00:10:46.545 } 00:10:46.545 ], 00:10:46.545 "driver_specific": {} 00:10:46.545 } 00:10:46.545 ] 00:10:46.545 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.545 05:48:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:46.545 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:46.545 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:46.545 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:46.545 05:48:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:10:46.545 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.545 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:46.545 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.545 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:46.545 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.545 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.545 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.545 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.545 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.545 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.545 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.545 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.545 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.545 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.545 "name": "Existed_Raid", 00:10:46.545 "uuid": "9a439d92-8b0e-4e4a-98e2-b978defabc40", 00:10:46.545 "strip_size_kb": 64, 00:10:46.545 "state": "configuring", 00:10:46.545 "raid_level": "raid0", 00:10:46.545 "superblock": true, 00:10:46.545 "num_base_bdevs": 4, 00:10:46.545 "num_base_bdevs_discovered": 2, 00:10:46.545 "num_base_bdevs_operational": 4, 
00:10:46.545 "base_bdevs_list": [ 00:10:46.545 { 00:10:46.545 "name": "BaseBdev1", 00:10:46.545 "uuid": "3801c5d5-c0ac-435a-85e0-d62bc5948968", 00:10:46.545 "is_configured": true, 00:10:46.545 "data_offset": 2048, 00:10:46.545 "data_size": 63488 00:10:46.545 }, 00:10:46.545 { 00:10:46.545 "name": "BaseBdev2", 00:10:46.545 "uuid": "1582a4b2-9849-409f-b4e2-aac86790bdd7", 00:10:46.545 "is_configured": true, 00:10:46.545 "data_offset": 2048, 00:10:46.545 "data_size": 63488 00:10:46.545 }, 00:10:46.545 { 00:10:46.545 "name": "BaseBdev3", 00:10:46.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.545 "is_configured": false, 00:10:46.545 "data_offset": 0, 00:10:46.545 "data_size": 0 00:10:46.545 }, 00:10:46.545 { 00:10:46.545 "name": "BaseBdev4", 00:10:46.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.545 "is_configured": false, 00:10:46.545 "data_offset": 0, 00:10:46.545 "data_size": 0 00:10:46.545 } 00:10:46.545 ] 00:10:46.545 }' 00:10:46.545 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.545 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.112 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:47.112 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.112 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.112 [2024-12-12 05:48:54.485287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:47.112 BaseBdev3 00:10:47.112 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.112 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:47.112 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:10:47.112 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.112 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:47.112 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.112 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.112 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.112 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.112 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.113 [ 00:10:47.113 { 00:10:47.113 "name": "BaseBdev3", 00:10:47.113 "aliases": [ 00:10:47.113 "b98908c7-5115-40d6-b108-5e0096ad0314" 00:10:47.113 ], 00:10:47.113 "product_name": "Malloc disk", 00:10:47.113 "block_size": 512, 00:10:47.113 "num_blocks": 65536, 00:10:47.113 "uuid": "b98908c7-5115-40d6-b108-5e0096ad0314", 00:10:47.113 "assigned_rate_limits": { 00:10:47.113 "rw_ios_per_sec": 0, 00:10:47.113 "rw_mbytes_per_sec": 0, 00:10:47.113 "r_mbytes_per_sec": 0, 00:10:47.113 "w_mbytes_per_sec": 0 00:10:47.113 }, 00:10:47.113 "claimed": true, 00:10:47.113 "claim_type": "exclusive_write", 00:10:47.113 "zoned": false, 00:10:47.113 "supported_io_types": { 00:10:47.113 "read": true, 00:10:47.113 
"write": true, 00:10:47.113 "unmap": true, 00:10:47.113 "flush": true, 00:10:47.113 "reset": true, 00:10:47.113 "nvme_admin": false, 00:10:47.113 "nvme_io": false, 00:10:47.113 "nvme_io_md": false, 00:10:47.113 "write_zeroes": true, 00:10:47.113 "zcopy": true, 00:10:47.113 "get_zone_info": false, 00:10:47.113 "zone_management": false, 00:10:47.113 "zone_append": false, 00:10:47.113 "compare": false, 00:10:47.113 "compare_and_write": false, 00:10:47.113 "abort": true, 00:10:47.113 "seek_hole": false, 00:10:47.113 "seek_data": false, 00:10:47.113 "copy": true, 00:10:47.113 "nvme_iov_md": false 00:10:47.113 }, 00:10:47.113 "memory_domains": [ 00:10:47.113 { 00:10:47.113 "dma_device_id": "system", 00:10:47.113 "dma_device_type": 1 00:10:47.113 }, 00:10:47.113 { 00:10:47.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.113 "dma_device_type": 2 00:10:47.113 } 00:10:47.113 ], 00:10:47.113 "driver_specific": {} 00:10:47.113 } 00:10:47.113 ] 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.113 "name": "Existed_Raid", 00:10:47.113 "uuid": "9a439d92-8b0e-4e4a-98e2-b978defabc40", 00:10:47.113 "strip_size_kb": 64, 00:10:47.113 "state": "configuring", 00:10:47.113 "raid_level": "raid0", 00:10:47.113 "superblock": true, 00:10:47.113 "num_base_bdevs": 4, 00:10:47.113 "num_base_bdevs_discovered": 3, 00:10:47.113 "num_base_bdevs_operational": 4, 00:10:47.113 "base_bdevs_list": [ 00:10:47.113 { 00:10:47.113 "name": "BaseBdev1", 00:10:47.113 "uuid": "3801c5d5-c0ac-435a-85e0-d62bc5948968", 00:10:47.113 "is_configured": true, 00:10:47.113 "data_offset": 2048, 00:10:47.113 "data_size": 63488 00:10:47.113 }, 00:10:47.113 { 00:10:47.113 "name": "BaseBdev2", 00:10:47.113 "uuid": 
"1582a4b2-9849-409f-b4e2-aac86790bdd7", 00:10:47.113 "is_configured": true, 00:10:47.113 "data_offset": 2048, 00:10:47.113 "data_size": 63488 00:10:47.113 }, 00:10:47.113 { 00:10:47.113 "name": "BaseBdev3", 00:10:47.113 "uuid": "b98908c7-5115-40d6-b108-5e0096ad0314", 00:10:47.113 "is_configured": true, 00:10:47.113 "data_offset": 2048, 00:10:47.113 "data_size": 63488 00:10:47.113 }, 00:10:47.113 { 00:10:47.113 "name": "BaseBdev4", 00:10:47.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.113 "is_configured": false, 00:10:47.113 "data_offset": 0, 00:10:47.113 "data_size": 0 00:10:47.113 } 00:10:47.113 ] 00:10:47.113 }' 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.113 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.683 [2024-12-12 05:48:54.952487] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:47.683 [2024-12-12 05:48:54.952782] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:47.683 [2024-12-12 05:48:54.952798] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:47.683 [2024-12-12 05:48:54.953114] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:47.683 [2024-12-12 05:48:54.953281] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:47.683 [2024-12-12 05:48:54.953293] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:10:47.683 BaseBdev4 00:10:47.683 [2024-12-12 05:48:54.953476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.683 [ 00:10:47.683 { 00:10:47.683 "name": "BaseBdev4", 00:10:47.683 "aliases": [ 00:10:47.683 "581ee40d-4448-496b-b7ec-006eab954263" 00:10:47.683 ], 00:10:47.683 "product_name": "Malloc disk", 00:10:47.683 "block_size": 512, 00:10:47.683 
"num_blocks": 65536, 00:10:47.683 "uuid": "581ee40d-4448-496b-b7ec-006eab954263", 00:10:47.683 "assigned_rate_limits": { 00:10:47.683 "rw_ios_per_sec": 0, 00:10:47.683 "rw_mbytes_per_sec": 0, 00:10:47.683 "r_mbytes_per_sec": 0, 00:10:47.683 "w_mbytes_per_sec": 0 00:10:47.683 }, 00:10:47.683 "claimed": true, 00:10:47.683 "claim_type": "exclusive_write", 00:10:47.683 "zoned": false, 00:10:47.683 "supported_io_types": { 00:10:47.683 "read": true, 00:10:47.683 "write": true, 00:10:47.683 "unmap": true, 00:10:47.683 "flush": true, 00:10:47.683 "reset": true, 00:10:47.683 "nvme_admin": false, 00:10:47.683 "nvme_io": false, 00:10:47.683 "nvme_io_md": false, 00:10:47.683 "write_zeroes": true, 00:10:47.683 "zcopy": true, 00:10:47.683 "get_zone_info": false, 00:10:47.683 "zone_management": false, 00:10:47.683 "zone_append": false, 00:10:47.683 "compare": false, 00:10:47.683 "compare_and_write": false, 00:10:47.683 "abort": true, 00:10:47.683 "seek_hole": false, 00:10:47.683 "seek_data": false, 00:10:47.683 "copy": true, 00:10:47.683 "nvme_iov_md": false 00:10:47.683 }, 00:10:47.683 "memory_domains": [ 00:10:47.683 { 00:10:47.683 "dma_device_id": "system", 00:10:47.683 "dma_device_type": 1 00:10:47.683 }, 00:10:47.683 { 00:10:47.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.683 "dma_device_type": 2 00:10:47.683 } 00:10:47.683 ], 00:10:47.683 "driver_specific": {} 00:10:47.683 } 00:10:47.683 ] 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.683 05:48:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.683 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.683 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.683 "name": "Existed_Raid", 00:10:47.683 "uuid": "9a439d92-8b0e-4e4a-98e2-b978defabc40", 00:10:47.683 "strip_size_kb": 64, 00:10:47.683 "state": "online", 00:10:47.683 "raid_level": "raid0", 00:10:47.683 "superblock": true, 00:10:47.683 "num_base_bdevs": 4, 
00:10:47.683 "num_base_bdevs_discovered": 4, 00:10:47.683 "num_base_bdevs_operational": 4, 00:10:47.683 "base_bdevs_list": [ 00:10:47.683 { 00:10:47.683 "name": "BaseBdev1", 00:10:47.683 "uuid": "3801c5d5-c0ac-435a-85e0-d62bc5948968", 00:10:47.683 "is_configured": true, 00:10:47.683 "data_offset": 2048, 00:10:47.683 "data_size": 63488 00:10:47.683 }, 00:10:47.683 { 00:10:47.683 "name": "BaseBdev2", 00:10:47.683 "uuid": "1582a4b2-9849-409f-b4e2-aac86790bdd7", 00:10:47.683 "is_configured": true, 00:10:47.683 "data_offset": 2048, 00:10:47.683 "data_size": 63488 00:10:47.683 }, 00:10:47.683 { 00:10:47.683 "name": "BaseBdev3", 00:10:47.683 "uuid": "b98908c7-5115-40d6-b108-5e0096ad0314", 00:10:47.683 "is_configured": true, 00:10:47.683 "data_offset": 2048, 00:10:47.683 "data_size": 63488 00:10:47.683 }, 00:10:47.683 { 00:10:47.683 "name": "BaseBdev4", 00:10:47.683 "uuid": "581ee40d-4448-496b-b7ec-006eab954263", 00:10:47.683 "is_configured": true, 00:10:47.683 "data_offset": 2048, 00:10:47.683 "data_size": 63488 00:10:47.683 } 00:10:47.683 ] 00:10:47.683 }' 00:10:47.683 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.683 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.943 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:47.943 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:47.943 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:47.943 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:47.943 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:47.943 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:47.943 
05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:47.943 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.943 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.943 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:47.943 [2024-12-12 05:48:55.412057] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:47.943 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.943 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:47.943 "name": "Existed_Raid", 00:10:47.943 "aliases": [ 00:10:47.943 "9a439d92-8b0e-4e4a-98e2-b978defabc40" 00:10:47.943 ], 00:10:47.943 "product_name": "Raid Volume", 00:10:47.943 "block_size": 512, 00:10:47.943 "num_blocks": 253952, 00:10:47.943 "uuid": "9a439d92-8b0e-4e4a-98e2-b978defabc40", 00:10:47.943 "assigned_rate_limits": { 00:10:47.943 "rw_ios_per_sec": 0, 00:10:47.943 "rw_mbytes_per_sec": 0, 00:10:47.943 "r_mbytes_per_sec": 0, 00:10:47.943 "w_mbytes_per_sec": 0 00:10:47.943 }, 00:10:47.943 "claimed": false, 00:10:47.943 "zoned": false, 00:10:47.943 "supported_io_types": { 00:10:47.943 "read": true, 00:10:47.943 "write": true, 00:10:47.943 "unmap": true, 00:10:47.943 "flush": true, 00:10:47.943 "reset": true, 00:10:47.943 "nvme_admin": false, 00:10:47.943 "nvme_io": false, 00:10:47.943 "nvme_io_md": false, 00:10:47.943 "write_zeroes": true, 00:10:47.943 "zcopy": false, 00:10:47.943 "get_zone_info": false, 00:10:47.943 "zone_management": false, 00:10:47.943 "zone_append": false, 00:10:47.943 "compare": false, 00:10:47.943 "compare_and_write": false, 00:10:47.943 "abort": false, 00:10:47.943 "seek_hole": false, 00:10:47.943 "seek_data": false, 00:10:47.943 "copy": false, 00:10:47.943 
"nvme_iov_md": false 00:10:47.943 }, 00:10:47.943 "memory_domains": [ 00:10:47.943 { 00:10:47.943 "dma_device_id": "system", 00:10:47.943 "dma_device_type": 1 00:10:47.943 }, 00:10:47.943 { 00:10:47.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.943 "dma_device_type": 2 00:10:47.943 }, 00:10:47.943 { 00:10:47.943 "dma_device_id": "system", 00:10:47.943 "dma_device_type": 1 00:10:47.943 }, 00:10:47.943 { 00:10:47.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.943 "dma_device_type": 2 00:10:47.943 }, 00:10:47.943 { 00:10:47.943 "dma_device_id": "system", 00:10:47.943 "dma_device_type": 1 00:10:47.943 }, 00:10:47.943 { 00:10:47.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.943 "dma_device_type": 2 00:10:47.943 }, 00:10:47.943 { 00:10:47.943 "dma_device_id": "system", 00:10:47.943 "dma_device_type": 1 00:10:47.943 }, 00:10:47.943 { 00:10:47.943 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:47.943 "dma_device_type": 2 00:10:47.943 } 00:10:47.943 ], 00:10:47.943 "driver_specific": { 00:10:47.943 "raid": { 00:10:47.943 "uuid": "9a439d92-8b0e-4e4a-98e2-b978defabc40", 00:10:47.943 "strip_size_kb": 64, 00:10:47.943 "state": "online", 00:10:47.943 "raid_level": "raid0", 00:10:47.943 "superblock": true, 00:10:47.943 "num_base_bdevs": 4, 00:10:47.943 "num_base_bdevs_discovered": 4, 00:10:47.943 "num_base_bdevs_operational": 4, 00:10:47.943 "base_bdevs_list": [ 00:10:47.943 { 00:10:47.943 "name": "BaseBdev1", 00:10:47.943 "uuid": "3801c5d5-c0ac-435a-85e0-d62bc5948968", 00:10:47.943 "is_configured": true, 00:10:47.943 "data_offset": 2048, 00:10:47.943 "data_size": 63488 00:10:47.943 }, 00:10:47.943 { 00:10:47.943 "name": "BaseBdev2", 00:10:47.943 "uuid": "1582a4b2-9849-409f-b4e2-aac86790bdd7", 00:10:47.943 "is_configured": true, 00:10:47.943 "data_offset": 2048, 00:10:47.943 "data_size": 63488 00:10:47.943 }, 00:10:47.943 { 00:10:47.943 "name": "BaseBdev3", 00:10:47.943 "uuid": "b98908c7-5115-40d6-b108-5e0096ad0314", 00:10:47.943 "is_configured": true, 
00:10:47.943 "data_offset": 2048, 00:10:47.943 "data_size": 63488 00:10:47.943 }, 00:10:47.943 { 00:10:47.943 "name": "BaseBdev4", 00:10:47.943 "uuid": "581ee40d-4448-496b-b7ec-006eab954263", 00:10:47.943 "is_configured": true, 00:10:47.943 "data_offset": 2048, 00:10:47.943 "data_size": 63488 00:10:47.943 } 00:10:47.943 ] 00:10:47.943 } 00:10:47.943 } 00:10:47.943 }' 00:10:47.944 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:48.203 BaseBdev2 00:10:48.203 BaseBdev3 00:10:48.203 BaseBdev4' 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.203 05:48:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.203 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.203 [2024-12-12 05:48:55.719260] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:48.203 [2024-12-12 05:48:55.719288] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:48.203 [2024-12-12 05:48:55.719337] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.463 "name": "Existed_Raid", 00:10:48.463 "uuid": "9a439d92-8b0e-4e4a-98e2-b978defabc40", 00:10:48.463 "strip_size_kb": 64, 00:10:48.463 "state": "offline", 00:10:48.463 "raid_level": "raid0", 00:10:48.463 "superblock": true, 00:10:48.463 "num_base_bdevs": 4, 00:10:48.463 "num_base_bdevs_discovered": 3, 00:10:48.463 "num_base_bdevs_operational": 3, 00:10:48.463 "base_bdevs_list": [ 00:10:48.463 { 00:10:48.463 "name": null, 00:10:48.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.463 "is_configured": false, 00:10:48.463 "data_offset": 0, 00:10:48.463 "data_size": 63488 00:10:48.463 }, 00:10:48.463 { 00:10:48.463 "name": "BaseBdev2", 00:10:48.463 "uuid": "1582a4b2-9849-409f-b4e2-aac86790bdd7", 00:10:48.463 "is_configured": true, 00:10:48.463 "data_offset": 2048, 00:10:48.463 "data_size": 63488 00:10:48.463 }, 00:10:48.463 { 00:10:48.463 "name": "BaseBdev3", 00:10:48.463 "uuid": "b98908c7-5115-40d6-b108-5e0096ad0314", 00:10:48.463 "is_configured": true, 00:10:48.463 "data_offset": 2048, 00:10:48.463 "data_size": 63488 00:10:48.463 }, 00:10:48.463 { 00:10:48.463 "name": "BaseBdev4", 00:10:48.463 "uuid": "581ee40d-4448-496b-b7ec-006eab954263", 00:10:48.463 "is_configured": true, 00:10:48.463 "data_offset": 2048, 00:10:48.463 "data_size": 63488 00:10:48.463 } 00:10:48.463 ] 00:10:48.463 }' 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.463 05:48:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.723 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:48.723 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.723 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.723 
05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.723 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.723 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.983 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.983 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:48.983 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:48.983 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:48.983 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.983 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.983 [2024-12-12 05:48:56.295542] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:48.983 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.983 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:48.983 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:48.983 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.983 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:48.983 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.983 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.983 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:48.983 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:48.983 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:48.983 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:48.983 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.983 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.983 [2024-12-12 05:48:56.432351] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:10:49.243 05:48:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.243 [2024-12-12 05:48:56.574668] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:10:49.243 [2024-12-12 05:48:56.574759] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.243 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:10:49.244 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.244 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.244 BaseBdev2 00:10:49.244 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.244 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:49.244 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:49.244 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.244 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.244 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.244 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.244 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.244 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.244 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.504 [ 00:10:49.504 { 00:10:49.504 "name": "BaseBdev2", 00:10:49.504 "aliases": [ 00:10:49.504 
"4cd9752d-ecd9-47d6-af3d-0b39bcbd3c2b" 00:10:49.504 ], 00:10:49.504 "product_name": "Malloc disk", 00:10:49.504 "block_size": 512, 00:10:49.504 "num_blocks": 65536, 00:10:49.504 "uuid": "4cd9752d-ecd9-47d6-af3d-0b39bcbd3c2b", 00:10:49.504 "assigned_rate_limits": { 00:10:49.504 "rw_ios_per_sec": 0, 00:10:49.504 "rw_mbytes_per_sec": 0, 00:10:49.504 "r_mbytes_per_sec": 0, 00:10:49.504 "w_mbytes_per_sec": 0 00:10:49.504 }, 00:10:49.504 "claimed": false, 00:10:49.504 "zoned": false, 00:10:49.504 "supported_io_types": { 00:10:49.504 "read": true, 00:10:49.504 "write": true, 00:10:49.504 "unmap": true, 00:10:49.504 "flush": true, 00:10:49.504 "reset": true, 00:10:49.504 "nvme_admin": false, 00:10:49.504 "nvme_io": false, 00:10:49.504 "nvme_io_md": false, 00:10:49.504 "write_zeroes": true, 00:10:49.504 "zcopy": true, 00:10:49.504 "get_zone_info": false, 00:10:49.504 "zone_management": false, 00:10:49.504 "zone_append": false, 00:10:49.504 "compare": false, 00:10:49.504 "compare_and_write": false, 00:10:49.504 "abort": true, 00:10:49.504 "seek_hole": false, 00:10:49.504 "seek_data": false, 00:10:49.504 "copy": true, 00:10:49.504 "nvme_iov_md": false 00:10:49.504 }, 00:10:49.504 "memory_domains": [ 00:10:49.504 { 00:10:49.504 "dma_device_id": "system", 00:10:49.504 "dma_device_type": 1 00:10:49.504 }, 00:10:49.504 { 00:10:49.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.504 "dma_device_type": 2 00:10:49.504 } 00:10:49.504 ], 00:10:49.504 "driver_specific": {} 00:10:49.504 } 00:10:49.504 ] 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.504 05:48:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.504 BaseBdev3 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.504 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.504 [ 00:10:49.504 { 
00:10:49.504 "name": "BaseBdev3", 00:10:49.504 "aliases": [ 00:10:49.504 "592a8a51-39cd-413f-9953-dbc90d840fb3" 00:10:49.504 ], 00:10:49.504 "product_name": "Malloc disk", 00:10:49.504 "block_size": 512, 00:10:49.504 "num_blocks": 65536, 00:10:49.504 "uuid": "592a8a51-39cd-413f-9953-dbc90d840fb3", 00:10:49.504 "assigned_rate_limits": { 00:10:49.504 "rw_ios_per_sec": 0, 00:10:49.504 "rw_mbytes_per_sec": 0, 00:10:49.504 "r_mbytes_per_sec": 0, 00:10:49.504 "w_mbytes_per_sec": 0 00:10:49.505 }, 00:10:49.505 "claimed": false, 00:10:49.505 "zoned": false, 00:10:49.505 "supported_io_types": { 00:10:49.505 "read": true, 00:10:49.505 "write": true, 00:10:49.505 "unmap": true, 00:10:49.505 "flush": true, 00:10:49.505 "reset": true, 00:10:49.505 "nvme_admin": false, 00:10:49.505 "nvme_io": false, 00:10:49.505 "nvme_io_md": false, 00:10:49.505 "write_zeroes": true, 00:10:49.505 "zcopy": true, 00:10:49.505 "get_zone_info": false, 00:10:49.505 "zone_management": false, 00:10:49.505 "zone_append": false, 00:10:49.505 "compare": false, 00:10:49.505 "compare_and_write": false, 00:10:49.505 "abort": true, 00:10:49.505 "seek_hole": false, 00:10:49.505 "seek_data": false, 00:10:49.505 "copy": true, 00:10:49.505 "nvme_iov_md": false 00:10:49.505 }, 00:10:49.505 "memory_domains": [ 00:10:49.505 { 00:10:49.505 "dma_device_id": "system", 00:10:49.505 "dma_device_type": 1 00:10:49.505 }, 00:10:49.505 { 00:10:49.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.505 "dma_device_type": 2 00:10:49.505 } 00:10:49.505 ], 00:10:49.505 "driver_specific": {} 00:10:49.505 } 00:10:49.505 ] 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.505 BaseBdev4 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:10:49.505 [ 00:10:49.505 { 00:10:49.505 "name": "BaseBdev4", 00:10:49.505 "aliases": [ 00:10:49.505 "23d42bfe-685c-4e54-bc0b-3ab15bd5a2c2" 00:10:49.505 ], 00:10:49.505 "product_name": "Malloc disk", 00:10:49.505 "block_size": 512, 00:10:49.505 "num_blocks": 65536, 00:10:49.505 "uuid": "23d42bfe-685c-4e54-bc0b-3ab15bd5a2c2", 00:10:49.505 "assigned_rate_limits": { 00:10:49.505 "rw_ios_per_sec": 0, 00:10:49.505 "rw_mbytes_per_sec": 0, 00:10:49.505 "r_mbytes_per_sec": 0, 00:10:49.505 "w_mbytes_per_sec": 0 00:10:49.505 }, 00:10:49.505 "claimed": false, 00:10:49.505 "zoned": false, 00:10:49.505 "supported_io_types": { 00:10:49.505 "read": true, 00:10:49.505 "write": true, 00:10:49.505 "unmap": true, 00:10:49.505 "flush": true, 00:10:49.505 "reset": true, 00:10:49.505 "nvme_admin": false, 00:10:49.505 "nvme_io": false, 00:10:49.505 "nvme_io_md": false, 00:10:49.505 "write_zeroes": true, 00:10:49.505 "zcopy": true, 00:10:49.505 "get_zone_info": false, 00:10:49.505 "zone_management": false, 00:10:49.505 "zone_append": false, 00:10:49.505 "compare": false, 00:10:49.505 "compare_and_write": false, 00:10:49.505 "abort": true, 00:10:49.505 "seek_hole": false, 00:10:49.505 "seek_data": false, 00:10:49.505 "copy": true, 00:10:49.505 "nvme_iov_md": false 00:10:49.505 }, 00:10:49.505 "memory_domains": [ 00:10:49.505 { 00:10:49.505 "dma_device_id": "system", 00:10:49.505 "dma_device_type": 1 00:10:49.505 }, 00:10:49.505 { 00:10:49.505 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.505 "dma_device_type": 2 00:10:49.505 } 00:10:49.505 ], 00:10:49.505 "driver_specific": {} 00:10:49.505 } 00:10:49.505 ] 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:49.505 05:48:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.505 [2024-12-12 05:48:56.962888] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:49.505 [2024-12-12 05:48:56.962969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:49.505 [2024-12-12 05:48:56.963030] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:49.505 [2024-12-12 05:48:56.964806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:49.505 [2024-12-12 05:48:56.964911] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.505 05:48:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.505 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.505 "name": "Existed_Raid", 00:10:49.505 "uuid": "6934948b-aa09-4bd6-8fd2-9da4708d7b98", 00:10:49.505 "strip_size_kb": 64, 00:10:49.505 "state": "configuring", 00:10:49.505 "raid_level": "raid0", 00:10:49.505 "superblock": true, 00:10:49.505 "num_base_bdevs": 4, 00:10:49.505 "num_base_bdevs_discovered": 3, 00:10:49.505 "num_base_bdevs_operational": 4, 00:10:49.505 "base_bdevs_list": [ 00:10:49.505 { 00:10:49.505 "name": "BaseBdev1", 00:10:49.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.505 "is_configured": false, 00:10:49.505 "data_offset": 0, 00:10:49.505 "data_size": 0 00:10:49.505 }, 00:10:49.505 { 00:10:49.505 "name": "BaseBdev2", 00:10:49.505 "uuid": "4cd9752d-ecd9-47d6-af3d-0b39bcbd3c2b", 00:10:49.505 "is_configured": true, 00:10:49.505 "data_offset": 2048, 00:10:49.505 "data_size": 63488 
00:10:49.505 }, 00:10:49.505 { 00:10:49.505 "name": "BaseBdev3", 00:10:49.505 "uuid": "592a8a51-39cd-413f-9953-dbc90d840fb3", 00:10:49.505 "is_configured": true, 00:10:49.505 "data_offset": 2048, 00:10:49.505 "data_size": 63488 00:10:49.505 }, 00:10:49.505 { 00:10:49.505 "name": "BaseBdev4", 00:10:49.505 "uuid": "23d42bfe-685c-4e54-bc0b-3ab15bd5a2c2", 00:10:49.505 "is_configured": true, 00:10:49.505 "data_offset": 2048, 00:10:49.505 "data_size": 63488 00:10:49.505 } 00:10:49.505 ] 00:10:49.505 }' 00:10:49.505 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.505 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.075 [2024-12-12 05:48:57.406293] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.075 "name": "Existed_Raid", 00:10:50.075 "uuid": "6934948b-aa09-4bd6-8fd2-9da4708d7b98", 00:10:50.075 "strip_size_kb": 64, 00:10:50.075 "state": "configuring", 00:10:50.075 "raid_level": "raid0", 00:10:50.075 "superblock": true, 00:10:50.075 "num_base_bdevs": 4, 00:10:50.075 "num_base_bdevs_discovered": 2, 00:10:50.075 "num_base_bdevs_operational": 4, 00:10:50.075 "base_bdevs_list": [ 00:10:50.075 { 00:10:50.075 "name": "BaseBdev1", 00:10:50.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.075 "is_configured": false, 00:10:50.075 "data_offset": 0, 00:10:50.075 "data_size": 0 00:10:50.075 }, 00:10:50.075 { 00:10:50.075 "name": null, 00:10:50.075 "uuid": "4cd9752d-ecd9-47d6-af3d-0b39bcbd3c2b", 00:10:50.075 "is_configured": false, 00:10:50.075 "data_offset": 0, 00:10:50.075 "data_size": 63488 
00:10:50.075 }, 00:10:50.075 { 00:10:50.075 "name": "BaseBdev3", 00:10:50.075 "uuid": "592a8a51-39cd-413f-9953-dbc90d840fb3", 00:10:50.075 "is_configured": true, 00:10:50.075 "data_offset": 2048, 00:10:50.075 "data_size": 63488 00:10:50.075 }, 00:10:50.075 { 00:10:50.075 "name": "BaseBdev4", 00:10:50.075 "uuid": "23d42bfe-685c-4e54-bc0b-3ab15bd5a2c2", 00:10:50.075 "is_configured": true, 00:10:50.075 "data_offset": 2048, 00:10:50.075 "data_size": 63488 00:10:50.075 } 00:10:50.075 ] 00:10:50.075 }' 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.075 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.335 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.335 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.335 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.335 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:50.335 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.595 [2024-12-12 05:48:57.918937] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.595 BaseBdev1 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.595 [ 00:10:50.595 { 00:10:50.595 "name": "BaseBdev1", 00:10:50.595 "aliases": [ 00:10:50.595 "6796306c-fc1b-4454-8a87-447764a7111a" 00:10:50.595 ], 00:10:50.595 "product_name": "Malloc disk", 00:10:50.595 "block_size": 512, 00:10:50.595 "num_blocks": 65536, 00:10:50.595 "uuid": "6796306c-fc1b-4454-8a87-447764a7111a", 00:10:50.595 "assigned_rate_limits": { 00:10:50.595 "rw_ios_per_sec": 0, 00:10:50.595 "rw_mbytes_per_sec": 0, 
00:10:50.595 "r_mbytes_per_sec": 0, 00:10:50.595 "w_mbytes_per_sec": 0 00:10:50.595 }, 00:10:50.595 "claimed": true, 00:10:50.595 "claim_type": "exclusive_write", 00:10:50.595 "zoned": false, 00:10:50.595 "supported_io_types": { 00:10:50.595 "read": true, 00:10:50.595 "write": true, 00:10:50.595 "unmap": true, 00:10:50.595 "flush": true, 00:10:50.595 "reset": true, 00:10:50.595 "nvme_admin": false, 00:10:50.595 "nvme_io": false, 00:10:50.595 "nvme_io_md": false, 00:10:50.595 "write_zeroes": true, 00:10:50.595 "zcopy": true, 00:10:50.595 "get_zone_info": false, 00:10:50.595 "zone_management": false, 00:10:50.595 "zone_append": false, 00:10:50.595 "compare": false, 00:10:50.595 "compare_and_write": false, 00:10:50.595 "abort": true, 00:10:50.595 "seek_hole": false, 00:10:50.595 "seek_data": false, 00:10:50.595 "copy": true, 00:10:50.595 "nvme_iov_md": false 00:10:50.595 }, 00:10:50.595 "memory_domains": [ 00:10:50.595 { 00:10:50.595 "dma_device_id": "system", 00:10:50.595 "dma_device_type": 1 00:10:50.595 }, 00:10:50.595 { 00:10:50.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.595 "dma_device_type": 2 00:10:50.595 } 00:10:50.595 ], 00:10:50.595 "driver_specific": {} 00:10:50.595 } 00:10:50.595 ] 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:50.595 05:48:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.595 05:48:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.595 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.595 "name": "Existed_Raid", 00:10:50.595 "uuid": "6934948b-aa09-4bd6-8fd2-9da4708d7b98", 00:10:50.595 "strip_size_kb": 64, 00:10:50.595 "state": "configuring", 00:10:50.595 "raid_level": "raid0", 00:10:50.595 "superblock": true, 00:10:50.595 "num_base_bdevs": 4, 00:10:50.595 "num_base_bdevs_discovered": 3, 00:10:50.595 "num_base_bdevs_operational": 4, 00:10:50.595 "base_bdevs_list": [ 00:10:50.595 { 00:10:50.595 "name": "BaseBdev1", 00:10:50.595 "uuid": "6796306c-fc1b-4454-8a87-447764a7111a", 00:10:50.595 "is_configured": true, 00:10:50.595 "data_offset": 2048, 00:10:50.595 "data_size": 63488 00:10:50.595 }, 00:10:50.595 { 
00:10:50.595 "name": null, 00:10:50.595 "uuid": "4cd9752d-ecd9-47d6-af3d-0b39bcbd3c2b", 00:10:50.595 "is_configured": false, 00:10:50.595 "data_offset": 0, 00:10:50.596 "data_size": 63488 00:10:50.596 }, 00:10:50.596 { 00:10:50.596 "name": "BaseBdev3", 00:10:50.596 "uuid": "592a8a51-39cd-413f-9953-dbc90d840fb3", 00:10:50.596 "is_configured": true, 00:10:50.596 "data_offset": 2048, 00:10:50.596 "data_size": 63488 00:10:50.596 }, 00:10:50.596 { 00:10:50.596 "name": "BaseBdev4", 00:10:50.596 "uuid": "23d42bfe-685c-4e54-bc0b-3ab15bd5a2c2", 00:10:50.596 "is_configured": true, 00:10:50.596 "data_offset": 2048, 00:10:50.596 "data_size": 63488 00:10:50.596 } 00:10:50.596 ] 00:10:50.596 }' 00:10:50.596 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.596 05:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.855 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:50.855 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.855 05:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.855 05:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.855 05:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.855 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:50.855 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:50.855 05:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.855 05:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.855 [2024-12-12 05:48:58.374522] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:51.115 05:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.115 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:51.115 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.115 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.115 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.115 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.115 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.115 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.115 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.115 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.115 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.115 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.115 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.115 05:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.115 05:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.115 05:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.115 05:48:58 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.115 "name": "Existed_Raid", 00:10:51.115 "uuid": "6934948b-aa09-4bd6-8fd2-9da4708d7b98", 00:10:51.115 "strip_size_kb": 64, 00:10:51.115 "state": "configuring", 00:10:51.115 "raid_level": "raid0", 00:10:51.115 "superblock": true, 00:10:51.115 "num_base_bdevs": 4, 00:10:51.115 "num_base_bdevs_discovered": 2, 00:10:51.115 "num_base_bdevs_operational": 4, 00:10:51.115 "base_bdevs_list": [ 00:10:51.115 { 00:10:51.115 "name": "BaseBdev1", 00:10:51.115 "uuid": "6796306c-fc1b-4454-8a87-447764a7111a", 00:10:51.115 "is_configured": true, 00:10:51.115 "data_offset": 2048, 00:10:51.115 "data_size": 63488 00:10:51.115 }, 00:10:51.115 { 00:10:51.115 "name": null, 00:10:51.115 "uuid": "4cd9752d-ecd9-47d6-af3d-0b39bcbd3c2b", 00:10:51.115 "is_configured": false, 00:10:51.115 "data_offset": 0, 00:10:51.115 "data_size": 63488 00:10:51.115 }, 00:10:51.115 { 00:10:51.115 "name": null, 00:10:51.115 "uuid": "592a8a51-39cd-413f-9953-dbc90d840fb3", 00:10:51.115 "is_configured": false, 00:10:51.115 "data_offset": 0, 00:10:51.115 "data_size": 63488 00:10:51.115 }, 00:10:51.115 { 00:10:51.115 "name": "BaseBdev4", 00:10:51.115 "uuid": "23d42bfe-685c-4e54-bc0b-3ab15bd5a2c2", 00:10:51.115 "is_configured": true, 00:10:51.115 "data_offset": 2048, 00:10:51.115 "data_size": 63488 00:10:51.115 } 00:10:51.115 ] 00:10:51.115 }' 00:10:51.115 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.115 05:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.375 
05:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.375 [2024-12-12 05:48:58.849758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.375 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.375 "name": "Existed_Raid", 00:10:51.375 "uuid": "6934948b-aa09-4bd6-8fd2-9da4708d7b98", 00:10:51.375 "strip_size_kb": 64, 00:10:51.375 "state": "configuring", 00:10:51.375 "raid_level": "raid0", 00:10:51.375 "superblock": true, 00:10:51.375 "num_base_bdevs": 4, 00:10:51.375 "num_base_bdevs_discovered": 3, 00:10:51.375 "num_base_bdevs_operational": 4, 00:10:51.375 "base_bdevs_list": [ 00:10:51.375 { 00:10:51.375 "name": "BaseBdev1", 00:10:51.375 "uuid": "6796306c-fc1b-4454-8a87-447764a7111a", 00:10:51.375 "is_configured": true, 00:10:51.375 "data_offset": 2048, 00:10:51.375 "data_size": 63488 00:10:51.375 }, 00:10:51.375 { 00:10:51.375 "name": null, 00:10:51.375 "uuid": "4cd9752d-ecd9-47d6-af3d-0b39bcbd3c2b", 00:10:51.375 "is_configured": false, 00:10:51.375 "data_offset": 0, 00:10:51.375 "data_size": 63488 00:10:51.375 }, 00:10:51.376 { 00:10:51.376 "name": "BaseBdev3", 00:10:51.376 "uuid": "592a8a51-39cd-413f-9953-dbc90d840fb3", 00:10:51.376 "is_configured": true, 00:10:51.376 "data_offset": 2048, 00:10:51.376 "data_size": 63488 00:10:51.376 }, 00:10:51.376 { 00:10:51.376 "name": "BaseBdev4", 00:10:51.376 "uuid": 
"23d42bfe-685c-4e54-bc0b-3ab15bd5a2c2", 00:10:51.376 "is_configured": true, 00:10:51.376 "data_offset": 2048, 00:10:51.376 "data_size": 63488 00:10:51.376 } 00:10:51.376 ] 00:10:51.376 }' 00:10:51.376 05:48:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.376 05:48:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.944 [2024-12-12 05:48:59.261083] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.944 "name": "Existed_Raid", 00:10:51.944 "uuid": "6934948b-aa09-4bd6-8fd2-9da4708d7b98", 00:10:51.944 "strip_size_kb": 64, 00:10:51.944 "state": "configuring", 00:10:51.944 "raid_level": "raid0", 00:10:51.944 "superblock": true, 00:10:51.944 "num_base_bdevs": 4, 00:10:51.944 "num_base_bdevs_discovered": 2, 00:10:51.944 "num_base_bdevs_operational": 4, 00:10:51.944 "base_bdevs_list": [ 00:10:51.944 { 00:10:51.944 "name": null, 00:10:51.944 
"uuid": "6796306c-fc1b-4454-8a87-447764a7111a", 00:10:51.944 "is_configured": false, 00:10:51.944 "data_offset": 0, 00:10:51.944 "data_size": 63488 00:10:51.944 }, 00:10:51.944 { 00:10:51.944 "name": null, 00:10:51.944 "uuid": "4cd9752d-ecd9-47d6-af3d-0b39bcbd3c2b", 00:10:51.944 "is_configured": false, 00:10:51.944 "data_offset": 0, 00:10:51.944 "data_size": 63488 00:10:51.944 }, 00:10:51.944 { 00:10:51.944 "name": "BaseBdev3", 00:10:51.944 "uuid": "592a8a51-39cd-413f-9953-dbc90d840fb3", 00:10:51.944 "is_configured": true, 00:10:51.944 "data_offset": 2048, 00:10:51.944 "data_size": 63488 00:10:51.944 }, 00:10:51.944 { 00:10:51.944 "name": "BaseBdev4", 00:10:51.944 "uuid": "23d42bfe-685c-4e54-bc0b-3ab15bd5a2c2", 00:10:51.944 "is_configured": true, 00:10:51.944 "data_offset": 2048, 00:10:51.944 "data_size": 63488 00:10:51.944 } 00:10:51.944 ] 00:10:51.944 }' 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.944 05:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.512 [2024-12-12 05:48:59.818638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.512 05:48:59 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.512 "name": "Existed_Raid", 00:10:52.512 "uuid": "6934948b-aa09-4bd6-8fd2-9da4708d7b98", 00:10:52.512 "strip_size_kb": 64, 00:10:52.512 "state": "configuring", 00:10:52.512 "raid_level": "raid0", 00:10:52.512 "superblock": true, 00:10:52.512 "num_base_bdevs": 4, 00:10:52.512 "num_base_bdevs_discovered": 3, 00:10:52.512 "num_base_bdevs_operational": 4, 00:10:52.512 "base_bdevs_list": [ 00:10:52.512 { 00:10:52.512 "name": null, 00:10:52.512 "uuid": "6796306c-fc1b-4454-8a87-447764a7111a", 00:10:52.512 "is_configured": false, 00:10:52.512 "data_offset": 0, 00:10:52.512 "data_size": 63488 00:10:52.512 }, 00:10:52.512 { 00:10:52.512 "name": "BaseBdev2", 00:10:52.512 "uuid": "4cd9752d-ecd9-47d6-af3d-0b39bcbd3c2b", 00:10:52.512 "is_configured": true, 00:10:52.512 "data_offset": 2048, 00:10:52.512 "data_size": 63488 00:10:52.512 }, 00:10:52.512 { 00:10:52.512 "name": "BaseBdev3", 00:10:52.512 "uuid": "592a8a51-39cd-413f-9953-dbc90d840fb3", 00:10:52.512 "is_configured": true, 00:10:52.512 "data_offset": 2048, 00:10:52.512 "data_size": 63488 00:10:52.512 }, 00:10:52.512 { 00:10:52.512 "name": "BaseBdev4", 00:10:52.512 "uuid": "23d42bfe-685c-4e54-bc0b-3ab15bd5a2c2", 00:10:52.512 "is_configured": true, 00:10:52.512 "data_offset": 2048, 00:10:52.512 "data_size": 63488 00:10:52.512 } 00:10:52.512 ] 00:10:52.512 }' 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.512 05:48:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.772 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.772 05:49:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:52.772 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.772 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.772 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.772 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:52.772 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:52.772 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.772 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.772 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.772 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.772 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6796306c-fc1b-4454-8a87-447764a7111a 00:10:52.772 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.772 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.038 [2024-12-12 05:49:00.330660] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:53.038 [2024-12-12 05:49:00.331019] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:53.038 [2024-12-12 05:49:00.331071] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:53.038 [2024-12-12 05:49:00.331399] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:10:53.038 [2024-12-12 05:49:00.331600] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:53.038 [2024-12-12 05:49:00.331647] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:53.038 NewBaseBdev 00:10:53.038 [2024-12-12 05:49:00.331854] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.038 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.038 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:53.038 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:53.038 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:53.038 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:53.038 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:53.038 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:53.038 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:53.038 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.038 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.038 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.038 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:53.038 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.038 05:49:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.038 [ 00:10:53.038 { 00:10:53.038 "name": "NewBaseBdev", 00:10:53.038 "aliases": [ 00:10:53.038 "6796306c-fc1b-4454-8a87-447764a7111a" 00:10:53.038 ], 00:10:53.038 "product_name": "Malloc disk", 00:10:53.039 "block_size": 512, 00:10:53.039 "num_blocks": 65536, 00:10:53.039 "uuid": "6796306c-fc1b-4454-8a87-447764a7111a", 00:10:53.039 "assigned_rate_limits": { 00:10:53.039 "rw_ios_per_sec": 0, 00:10:53.039 "rw_mbytes_per_sec": 0, 00:10:53.039 "r_mbytes_per_sec": 0, 00:10:53.039 "w_mbytes_per_sec": 0 00:10:53.039 }, 00:10:53.039 "claimed": true, 00:10:53.039 "claim_type": "exclusive_write", 00:10:53.039 "zoned": false, 00:10:53.039 "supported_io_types": { 00:10:53.039 "read": true, 00:10:53.039 "write": true, 00:10:53.039 "unmap": true, 00:10:53.039 "flush": true, 00:10:53.039 "reset": true, 00:10:53.039 "nvme_admin": false, 00:10:53.039 "nvme_io": false, 00:10:53.039 "nvme_io_md": false, 00:10:53.039 "write_zeroes": true, 00:10:53.039 "zcopy": true, 00:10:53.039 "get_zone_info": false, 00:10:53.039 "zone_management": false, 00:10:53.039 "zone_append": false, 00:10:53.039 "compare": false, 00:10:53.039 "compare_and_write": false, 00:10:53.039 "abort": true, 00:10:53.039 "seek_hole": false, 00:10:53.039 "seek_data": false, 00:10:53.039 "copy": true, 00:10:53.039 "nvme_iov_md": false 00:10:53.039 }, 00:10:53.039 "memory_domains": [ 00:10:53.039 { 00:10:53.039 "dma_device_id": "system", 00:10:53.039 "dma_device_type": 1 00:10:53.039 }, 00:10:53.039 { 00:10:53.039 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.039 "dma_device_type": 2 00:10:53.039 } 00:10:53.039 ], 00:10:53.039 "driver_specific": {} 00:10:53.039 } 00:10:53.039 ] 00:10:53.039 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.039 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:53.039 05:49:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:10:53.039 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.039 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.039 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:53.039 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.039 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:53.039 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.039 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.039 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.039 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.039 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.039 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.039 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.039 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.039 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.039 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.039 "name": "Existed_Raid", 00:10:53.039 "uuid": "6934948b-aa09-4bd6-8fd2-9da4708d7b98", 00:10:53.039 "strip_size_kb": 64, 00:10:53.039 
"state": "online", 00:10:53.039 "raid_level": "raid0", 00:10:53.039 "superblock": true, 00:10:53.039 "num_base_bdevs": 4, 00:10:53.039 "num_base_bdevs_discovered": 4, 00:10:53.039 "num_base_bdevs_operational": 4, 00:10:53.039 "base_bdevs_list": [ 00:10:53.039 { 00:10:53.039 "name": "NewBaseBdev", 00:10:53.039 "uuid": "6796306c-fc1b-4454-8a87-447764a7111a", 00:10:53.039 "is_configured": true, 00:10:53.039 "data_offset": 2048, 00:10:53.039 "data_size": 63488 00:10:53.039 }, 00:10:53.039 { 00:10:53.039 "name": "BaseBdev2", 00:10:53.039 "uuid": "4cd9752d-ecd9-47d6-af3d-0b39bcbd3c2b", 00:10:53.039 "is_configured": true, 00:10:53.039 "data_offset": 2048, 00:10:53.039 "data_size": 63488 00:10:53.039 }, 00:10:53.039 { 00:10:53.039 "name": "BaseBdev3", 00:10:53.039 "uuid": "592a8a51-39cd-413f-9953-dbc90d840fb3", 00:10:53.039 "is_configured": true, 00:10:53.039 "data_offset": 2048, 00:10:53.039 "data_size": 63488 00:10:53.039 }, 00:10:53.039 { 00:10:53.039 "name": "BaseBdev4", 00:10:53.039 "uuid": "23d42bfe-685c-4e54-bc0b-3ab15bd5a2c2", 00:10:53.039 "is_configured": true, 00:10:53.039 "data_offset": 2048, 00:10:53.039 "data_size": 63488 00:10:53.039 } 00:10:53.039 ] 00:10:53.039 }' 00:10:53.039 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.039 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.321 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:53.321 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:53.321 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:53.321 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:53.321 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:53.321 
05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:53.321 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:53.321 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:53.321 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.321 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.321 [2024-12-12 05:49:00.814409] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.321 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.581 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:53.581 "name": "Existed_Raid", 00:10:53.581 "aliases": [ 00:10:53.581 "6934948b-aa09-4bd6-8fd2-9da4708d7b98" 00:10:53.582 ], 00:10:53.582 "product_name": "Raid Volume", 00:10:53.582 "block_size": 512, 00:10:53.582 "num_blocks": 253952, 00:10:53.582 "uuid": "6934948b-aa09-4bd6-8fd2-9da4708d7b98", 00:10:53.582 "assigned_rate_limits": { 00:10:53.582 "rw_ios_per_sec": 0, 00:10:53.582 "rw_mbytes_per_sec": 0, 00:10:53.582 "r_mbytes_per_sec": 0, 00:10:53.582 "w_mbytes_per_sec": 0 00:10:53.582 }, 00:10:53.582 "claimed": false, 00:10:53.582 "zoned": false, 00:10:53.582 "supported_io_types": { 00:10:53.582 "read": true, 00:10:53.582 "write": true, 00:10:53.582 "unmap": true, 00:10:53.582 "flush": true, 00:10:53.582 "reset": true, 00:10:53.582 "nvme_admin": false, 00:10:53.582 "nvme_io": false, 00:10:53.582 "nvme_io_md": false, 00:10:53.582 "write_zeroes": true, 00:10:53.582 "zcopy": false, 00:10:53.582 "get_zone_info": false, 00:10:53.582 "zone_management": false, 00:10:53.582 "zone_append": false, 00:10:53.582 "compare": false, 00:10:53.582 "compare_and_write": false, 00:10:53.582 "abort": 
false, 00:10:53.582 "seek_hole": false, 00:10:53.582 "seek_data": false, 00:10:53.582 "copy": false, 00:10:53.582 "nvme_iov_md": false 00:10:53.582 }, 00:10:53.582 "memory_domains": [ 00:10:53.582 { 00:10:53.582 "dma_device_id": "system", 00:10:53.582 "dma_device_type": 1 00:10:53.582 }, 00:10:53.582 { 00:10:53.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.582 "dma_device_type": 2 00:10:53.582 }, 00:10:53.582 { 00:10:53.582 "dma_device_id": "system", 00:10:53.582 "dma_device_type": 1 00:10:53.582 }, 00:10:53.582 { 00:10:53.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.582 "dma_device_type": 2 00:10:53.582 }, 00:10:53.582 { 00:10:53.582 "dma_device_id": "system", 00:10:53.582 "dma_device_type": 1 00:10:53.582 }, 00:10:53.582 { 00:10:53.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.582 "dma_device_type": 2 00:10:53.582 }, 00:10:53.582 { 00:10:53.582 "dma_device_id": "system", 00:10:53.582 "dma_device_type": 1 00:10:53.582 }, 00:10:53.582 { 00:10:53.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.582 "dma_device_type": 2 00:10:53.582 } 00:10:53.582 ], 00:10:53.582 "driver_specific": { 00:10:53.582 "raid": { 00:10:53.582 "uuid": "6934948b-aa09-4bd6-8fd2-9da4708d7b98", 00:10:53.582 "strip_size_kb": 64, 00:10:53.582 "state": "online", 00:10:53.582 "raid_level": "raid0", 00:10:53.582 "superblock": true, 00:10:53.582 "num_base_bdevs": 4, 00:10:53.582 "num_base_bdevs_discovered": 4, 00:10:53.582 "num_base_bdevs_operational": 4, 00:10:53.582 "base_bdevs_list": [ 00:10:53.582 { 00:10:53.582 "name": "NewBaseBdev", 00:10:53.582 "uuid": "6796306c-fc1b-4454-8a87-447764a7111a", 00:10:53.582 "is_configured": true, 00:10:53.582 "data_offset": 2048, 00:10:53.582 "data_size": 63488 00:10:53.582 }, 00:10:53.582 { 00:10:53.582 "name": "BaseBdev2", 00:10:53.582 "uuid": "4cd9752d-ecd9-47d6-af3d-0b39bcbd3c2b", 00:10:53.582 "is_configured": true, 00:10:53.582 "data_offset": 2048, 00:10:53.582 "data_size": 63488 00:10:53.582 }, 00:10:53.582 { 00:10:53.582 
"name": "BaseBdev3", 00:10:53.582 "uuid": "592a8a51-39cd-413f-9953-dbc90d840fb3", 00:10:53.582 "is_configured": true, 00:10:53.582 "data_offset": 2048, 00:10:53.582 "data_size": 63488 00:10:53.582 }, 00:10:53.582 { 00:10:53.582 "name": "BaseBdev4", 00:10:53.582 "uuid": "23d42bfe-685c-4e54-bc0b-3ab15bd5a2c2", 00:10:53.582 "is_configured": true, 00:10:53.582 "data_offset": 2048, 00:10:53.582 "data_size": 63488 00:10:53.582 } 00:10:53.582 ] 00:10:53.582 } 00:10:53.582 } 00:10:53.582 }' 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:53.582 BaseBdev2 00:10:53.582 BaseBdev3 00:10:53.582 BaseBdev4' 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.582 05:49:00 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.582 05:49:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.582 05:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.582 05:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.582 05:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:10:53.582 05:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:53.582 05:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:10:53.582 05:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:53.582 05:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.582 05:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.582 05:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.582 05:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:53.582 05:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:53.582 05:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:53.582 05:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.582 05:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.582 [2024-12-12 05:49:01.093579] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:53.582 [2024-12-12 05:49:01.093610] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.582 [2024-12-12 05:49:01.093690] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.582 [2024-12-12 05:49:01.093757] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.582 [2024-12-12 05:49:01.093766] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:10:53.582 05:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.582 05:49:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70970 00:10:53.582 05:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70970 ']' 00:10:53.582 05:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70970 00:10:53.842 05:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:53.842 05:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.842 05:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70970 00:10:53.842 killing process with pid 70970 00:10:53.842 05:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.842 05:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.842 05:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70970' 00:10:53.842 05:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70970 00:10:53.842 [2024-12-12 05:49:01.128452] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:53.842 05:49:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70970 00:10:54.102 [2024-12-12 05:49:01.513629] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:55.483 05:49:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:55.484 00:10:55.484 real 0m10.968s 00:10:55.484 user 0m17.409s 00:10:55.484 sys 0m1.912s 00:10:55.484 05:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:55.484 
************************************ 00:10:55.484 END TEST raid_state_function_test_sb 00:10:55.484 ************************************ 00:10:55.484 05:49:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.484 05:49:02 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:10:55.484 05:49:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:55.484 05:49:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:55.484 05:49:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:55.484 ************************************ 00:10:55.484 START TEST raid_superblock_test 00:10:55.484 ************************************ 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=71637 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 71637 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 71637 ']' 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.484 05:49:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.484 [2024-12-12 05:49:02.753909] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:10:55.484 [2024-12-12 05:49:02.754029] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71637 ] 00:10:55.484 [2024-12-12 05:49:02.920169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.744 [2024-12-12 05:49:03.024782] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.744 [2024-12-12 05:49:03.222291] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.744 [2024-12-12 05:49:03.222321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.315 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.315 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:56.315 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:56.315 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.315 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:56.315 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:56.315 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:56.315 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:56.315 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:56.315 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:56.315 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:56.315 
05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.315 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.315 malloc1 00:10:56.315 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.315 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:56.315 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.315 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.315 [2024-12-12 05:49:03.619921] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:56.316 [2024-12-12 05:49:03.620037] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.316 [2024-12-12 05:49:03.620133] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:56.316 [2024-12-12 05:49:03.620179] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.316 [2024-12-12 05:49:03.622378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.316 [2024-12-12 05:49:03.622450] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:56.316 pt1 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.316 malloc2 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.316 [2024-12-12 05:49:03.673620] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:56.316 [2024-12-12 05:49:03.673714] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.316 [2024-12-12 05:49:03.673757] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:56.316 [2024-12-12 05:49:03.673766] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.316 [2024-12-12 05:49:03.675812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.316 [2024-12-12 05:49:03.675846] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:56.316 
pt2 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.316 malloc3 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.316 [2024-12-12 05:49:03.739102] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:56.316 [2024-12-12 05:49:03.739196] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.316 [2024-12-12 05:49:03.739235] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:56.316 [2024-12-12 05:49:03.739264] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.316 [2024-12-12 05:49:03.741321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.316 [2024-12-12 05:49:03.741387] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:56.316 pt3 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.316 malloc4 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.316 [2024-12-12 05:49:03.795510] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:10:56.316 [2024-12-12 05:49:03.795601] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.316 [2024-12-12 05:49:03.795655] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:56.316 [2024-12-12 05:49:03.795682] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.316 [2024-12-12 05:49:03.797709] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.316 [2024-12-12 05:49:03.797774] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:56.316 pt4 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.316 [2024-12-12 05:49:03.807523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:56.316 [2024-12-12 
05:49:03.809269] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:56.316 [2024-12-12 05:49:03.809406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:56.316 [2024-12-12 05:49:03.809477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:56.316 [2024-12-12 05:49:03.809704] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:56.316 [2024-12-12 05:49:03.809750] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:56.316 [2024-12-12 05:49:03.810048] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:56.316 [2024-12-12 05:49:03.810262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:56.316 [2024-12-12 05:49:03.810308] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:56.316 [2024-12-12 05:49:03.810527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.316 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.577 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.577 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.577 "name": "raid_bdev1", 00:10:56.577 "uuid": "93292078-f20b-4b03-b730-e56c42165b8a", 00:10:56.577 "strip_size_kb": 64, 00:10:56.577 "state": "online", 00:10:56.577 "raid_level": "raid0", 00:10:56.577 "superblock": true, 00:10:56.577 "num_base_bdevs": 4, 00:10:56.577 "num_base_bdevs_discovered": 4, 00:10:56.577 "num_base_bdevs_operational": 4, 00:10:56.577 "base_bdevs_list": [ 00:10:56.577 { 00:10:56.577 "name": "pt1", 00:10:56.577 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:56.577 "is_configured": true, 00:10:56.577 "data_offset": 2048, 00:10:56.577 "data_size": 63488 00:10:56.577 }, 00:10:56.577 { 00:10:56.577 "name": "pt2", 00:10:56.577 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:56.577 "is_configured": true, 00:10:56.577 "data_offset": 2048, 00:10:56.577 "data_size": 63488 00:10:56.577 }, 00:10:56.577 { 00:10:56.577 "name": "pt3", 00:10:56.577 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:56.577 "is_configured": true, 00:10:56.577 "data_offset": 2048, 00:10:56.577 
"data_size": 63488 00:10:56.577 }, 00:10:56.577 { 00:10:56.577 "name": "pt4", 00:10:56.577 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:56.577 "is_configured": true, 00:10:56.577 "data_offset": 2048, 00:10:56.577 "data_size": 63488 00:10:56.577 } 00:10:56.577 ] 00:10:56.577 }' 00:10:56.577 05:49:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.577 05:49:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.837 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:56.837 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:56.837 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:56.837 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:56.837 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:56.837 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:56.837 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:56.837 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:56.837 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.837 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.837 [2024-12-12 05:49:04.243095] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.837 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.837 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:56.837 "name": "raid_bdev1", 00:10:56.837 "aliases": [ 00:10:56.837 "93292078-f20b-4b03-b730-e56c42165b8a" 
00:10:56.837 ], 00:10:56.837 "product_name": "Raid Volume", 00:10:56.837 "block_size": 512, 00:10:56.837 "num_blocks": 253952, 00:10:56.837 "uuid": "93292078-f20b-4b03-b730-e56c42165b8a", 00:10:56.837 "assigned_rate_limits": { 00:10:56.837 "rw_ios_per_sec": 0, 00:10:56.837 "rw_mbytes_per_sec": 0, 00:10:56.837 "r_mbytes_per_sec": 0, 00:10:56.837 "w_mbytes_per_sec": 0 00:10:56.837 }, 00:10:56.837 "claimed": false, 00:10:56.837 "zoned": false, 00:10:56.837 "supported_io_types": { 00:10:56.837 "read": true, 00:10:56.837 "write": true, 00:10:56.837 "unmap": true, 00:10:56.837 "flush": true, 00:10:56.837 "reset": true, 00:10:56.837 "nvme_admin": false, 00:10:56.837 "nvme_io": false, 00:10:56.837 "nvme_io_md": false, 00:10:56.837 "write_zeroes": true, 00:10:56.837 "zcopy": false, 00:10:56.837 "get_zone_info": false, 00:10:56.837 "zone_management": false, 00:10:56.837 "zone_append": false, 00:10:56.837 "compare": false, 00:10:56.837 "compare_and_write": false, 00:10:56.837 "abort": false, 00:10:56.837 "seek_hole": false, 00:10:56.837 "seek_data": false, 00:10:56.837 "copy": false, 00:10:56.837 "nvme_iov_md": false 00:10:56.837 }, 00:10:56.837 "memory_domains": [ 00:10:56.837 { 00:10:56.837 "dma_device_id": "system", 00:10:56.837 "dma_device_type": 1 00:10:56.837 }, 00:10:56.837 { 00:10:56.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.837 "dma_device_type": 2 00:10:56.837 }, 00:10:56.837 { 00:10:56.837 "dma_device_id": "system", 00:10:56.837 "dma_device_type": 1 00:10:56.837 }, 00:10:56.837 { 00:10:56.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.837 "dma_device_type": 2 00:10:56.837 }, 00:10:56.837 { 00:10:56.837 "dma_device_id": "system", 00:10:56.837 "dma_device_type": 1 00:10:56.837 }, 00:10:56.837 { 00:10:56.837 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.837 "dma_device_type": 2 00:10:56.837 }, 00:10:56.837 { 00:10:56.837 "dma_device_id": "system", 00:10:56.837 "dma_device_type": 1 00:10:56.837 }, 00:10:56.837 { 00:10:56.837 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:56.837 "dma_device_type": 2 00:10:56.837 } 00:10:56.837 ], 00:10:56.837 "driver_specific": { 00:10:56.837 "raid": { 00:10:56.837 "uuid": "93292078-f20b-4b03-b730-e56c42165b8a", 00:10:56.837 "strip_size_kb": 64, 00:10:56.837 "state": "online", 00:10:56.837 "raid_level": "raid0", 00:10:56.837 "superblock": true, 00:10:56.837 "num_base_bdevs": 4, 00:10:56.837 "num_base_bdevs_discovered": 4, 00:10:56.838 "num_base_bdevs_operational": 4, 00:10:56.838 "base_bdevs_list": [ 00:10:56.838 { 00:10:56.838 "name": "pt1", 00:10:56.838 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:56.838 "is_configured": true, 00:10:56.838 "data_offset": 2048, 00:10:56.838 "data_size": 63488 00:10:56.838 }, 00:10:56.838 { 00:10:56.838 "name": "pt2", 00:10:56.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:56.838 "is_configured": true, 00:10:56.838 "data_offset": 2048, 00:10:56.838 "data_size": 63488 00:10:56.838 }, 00:10:56.838 { 00:10:56.838 "name": "pt3", 00:10:56.838 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:56.838 "is_configured": true, 00:10:56.838 "data_offset": 2048, 00:10:56.838 "data_size": 63488 00:10:56.838 }, 00:10:56.838 { 00:10:56.838 "name": "pt4", 00:10:56.838 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:56.838 "is_configured": true, 00:10:56.838 "data_offset": 2048, 00:10:56.838 "data_size": 63488 00:10:56.838 } 00:10:56.838 ] 00:10:56.838 } 00:10:56.838 } 00:10:56.838 }' 00:10:56.838 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:56.838 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:56.838 pt2 00:10:56.838 pt3 00:10:56.838 pt4' 00:10:56.838 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.098 05:49:04 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:57.098 [2024-12-12 05:49:04.566537] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=93292078-f20b-4b03-b730-e56c42165b8a 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 93292078-f20b-4b03-b730-e56c42165b8a ']' 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.098 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.098 [2024-12-12 05:49:04.614126] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.098 [2024-12-12 05:49:04.614150] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:57.098 [2024-12-12 05:49:04.614226] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:57.098 [2024-12-12 05:49:04.614292] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:57.098 [2024-12-12 05:49:04.614306] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.359 [2024-12-12 05:49:04.777879] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:57.359 [2024-12-12 05:49:04.779793] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:57.359 [2024-12-12 05:49:04.779839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:57.359 [2024-12-12 05:49:04.779872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:10:57.359 [2024-12-12 05:49:04.779920] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:57.359 [2024-12-12 05:49:04.779969] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:57.359 [2024-12-12 05:49:04.779992] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:57.359 [2024-12-12 05:49:04.780017] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:10:57.359 [2024-12-12 05:49:04.780032] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.359 [2024-12-12 05:49:04.780044] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:10:57.359 request: 00:10:57.359 { 00:10:57.359 "name": "raid_bdev1", 00:10:57.359 "raid_level": "raid0", 00:10:57.359 "base_bdevs": [ 00:10:57.359 "malloc1", 00:10:57.359 "malloc2", 00:10:57.359 "malloc3", 00:10:57.359 "malloc4" 00:10:57.359 ], 00:10:57.359 "strip_size_kb": 64, 00:10:57.359 "superblock": false, 00:10:57.359 "method": "bdev_raid_create", 00:10:57.359 "req_id": 1 00:10:57.359 } 00:10:57.359 Got JSON-RPC error response 00:10:57.359 response: 00:10:57.359 { 00:10:57.359 "code": -17, 00:10:57.359 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:57.359 } 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.359 [2024-12-12 05:49:04.841732] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:57.359 [2024-12-12 05:49:04.841826] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.359 [2024-12-12 05:49:04.841861] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:57.359 [2024-12-12 05:49:04.841894] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.359 [2024-12-12 05:49:04.844019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.359 [2024-12-12 05:49:04.844094] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:57.359 [2024-12-12 05:49:04.844191] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:57.359 [2024-12-12 05:49:04.844275] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:57.359 pt1 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.359 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.619 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.619 "name": "raid_bdev1", 00:10:57.619 "uuid": "93292078-f20b-4b03-b730-e56c42165b8a", 00:10:57.619 "strip_size_kb": 64, 00:10:57.619 "state": "configuring", 00:10:57.619 "raid_level": "raid0", 00:10:57.619 "superblock": true, 00:10:57.619 "num_base_bdevs": 4, 00:10:57.619 "num_base_bdevs_discovered": 1, 00:10:57.619 "num_base_bdevs_operational": 4, 00:10:57.619 "base_bdevs_list": [ 00:10:57.619 { 00:10:57.619 "name": "pt1", 00:10:57.619 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:57.619 "is_configured": true, 00:10:57.619 "data_offset": 2048, 00:10:57.619 "data_size": 63488 00:10:57.619 }, 00:10:57.619 { 00:10:57.619 "name": null, 00:10:57.619 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.619 "is_configured": false, 00:10:57.619 "data_offset": 2048, 00:10:57.619 "data_size": 63488 00:10:57.619 }, 00:10:57.619 { 00:10:57.619 "name": null, 00:10:57.619 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:57.619 "is_configured": false, 00:10:57.619 "data_offset": 2048, 00:10:57.619 "data_size": 63488 00:10:57.619 }, 00:10:57.619 { 00:10:57.619 "name": null, 00:10:57.619 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:57.619 "is_configured": false, 00:10:57.619 "data_offset": 2048, 00:10:57.619 "data_size": 63488 00:10:57.619 } 00:10:57.619 ] 00:10:57.619 }' 00:10:57.619 05:49:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.619 05:49:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.878 [2024-12-12 05:49:05.233132] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:57.878 [2024-12-12 05:49:05.233205] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.878 [2024-12-12 05:49:05.233227] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:10:57.878 [2024-12-12 05:49:05.233238] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.878 [2024-12-12 05:49:05.233676] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.878 [2024-12-12 05:49:05.233697] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:57.878 [2024-12-12 05:49:05.233779] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:57.878 [2024-12-12 05:49:05.233802] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:57.878 pt2 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.878 [2024-12-12 05:49:05.245120] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.878 05:49:05 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:57.878 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.879 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:57.879 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:57.879 "name": "raid_bdev1", 00:10:57.879 "uuid": "93292078-f20b-4b03-b730-e56c42165b8a", 00:10:57.879 "strip_size_kb": 64, 00:10:57.879 "state": "configuring", 00:10:57.879 "raid_level": "raid0", 00:10:57.879 "superblock": true, 00:10:57.879 "num_base_bdevs": 4, 00:10:57.879 "num_base_bdevs_discovered": 1, 00:10:57.879 "num_base_bdevs_operational": 4, 00:10:57.879 "base_bdevs_list": [ 00:10:57.879 { 00:10:57.879 "name": "pt1", 00:10:57.879 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:57.879 "is_configured": true, 00:10:57.879 "data_offset": 2048, 00:10:57.879 "data_size": 63488 00:10:57.879 }, 00:10:57.879 { 00:10:57.879 "name": null, 00:10:57.879 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:57.879 "is_configured": false, 00:10:57.879 "data_offset": 0, 00:10:57.879 "data_size": 63488 00:10:57.879 }, 00:10:57.879 { 00:10:57.879 "name": null, 00:10:57.879 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:57.879 "is_configured": false, 00:10:57.879 "data_offset": 2048, 00:10:57.879 "data_size": 63488 00:10:57.879 }, 00:10:57.879 { 00:10:57.879 "name": null, 00:10:57.879 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:57.879 "is_configured": false, 00:10:57.879 "data_offset": 2048, 00:10:57.879 "data_size": 63488 00:10:57.879 } 00:10:57.879 ] 00:10:57.879 }' 00:10:57.879 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:57.879 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:10:58.138 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:58.138 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:58.138 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:58.138 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.138 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.399 [2024-12-12 05:49:05.660395] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:58.399 [2024-12-12 05:49:05.660513] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.399 [2024-12-12 05:49:05.660555] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:58.399 [2024-12-12 05:49:05.660582] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.399 [2024-12-12 05:49:05.661096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.399 [2024-12-12 05:49:05.661161] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:58.399 [2024-12-12 05:49:05.661292] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:58.399 [2024-12-12 05:49:05.661343] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:58.399 pt2 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.399 [2024-12-12 05:49:05.672343] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:58.399 [2024-12-12 05:49:05.672423] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.399 [2024-12-12 05:49:05.672472] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:58.399 [2024-12-12 05:49:05.672481] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.399 [2024-12-12 05:49:05.672863] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.399 [2024-12-12 05:49:05.672881] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:58.399 [2024-12-12 05:49:05.672939] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:58.399 [2024-12-12 05:49:05.672962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:58.399 pt3 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.399 [2024-12-12 05:49:05.684308] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:10:58.399 [2024-12-12 05:49:05.684352] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.399 [2024-12-12 05:49:05.684383] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:10:58.399 [2024-12-12 05:49:05.684391] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.399 [2024-12-12 05:49:05.684747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.399 [2024-12-12 05:49:05.684763] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:10:58.399 [2024-12-12 05:49:05.684818] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:10:58.399 [2024-12-12 05:49:05.684837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:10:58.399 [2024-12-12 05:49:05.684953] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:58.399 [2024-12-12 05:49:05.684961] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:10:58.399 [2024-12-12 05:49:05.685257] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:58.399 [2024-12-12 05:49:05.685428] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:58.399 [2024-12-12 05:49:05.685448] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:58.399 [2024-12-12 05:49:05.685654] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.399 pt4 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:58.399 
05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.399 "name": "raid_bdev1", 00:10:58.399 "uuid": "93292078-f20b-4b03-b730-e56c42165b8a", 00:10:58.399 "strip_size_kb": 64, 00:10:58.399 "state": "online", 00:10:58.399 "raid_level": "raid0", 00:10:58.399 "superblock": true, 00:10:58.399 
"num_base_bdevs": 4, 00:10:58.399 "num_base_bdevs_discovered": 4, 00:10:58.399 "num_base_bdevs_operational": 4, 00:10:58.399 "base_bdevs_list": [ 00:10:58.399 { 00:10:58.399 "name": "pt1", 00:10:58.399 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.399 "is_configured": true, 00:10:58.399 "data_offset": 2048, 00:10:58.399 "data_size": 63488 00:10:58.399 }, 00:10:58.399 { 00:10:58.399 "name": "pt2", 00:10:58.399 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.399 "is_configured": true, 00:10:58.399 "data_offset": 2048, 00:10:58.399 "data_size": 63488 00:10:58.399 }, 00:10:58.399 { 00:10:58.399 "name": "pt3", 00:10:58.399 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.399 "is_configured": true, 00:10:58.399 "data_offset": 2048, 00:10:58.399 "data_size": 63488 00:10:58.399 }, 00:10:58.399 { 00:10:58.399 "name": "pt4", 00:10:58.399 "uuid": "00000000-0000-0000-0000-000000000004", 00:10:58.399 "is_configured": true, 00:10:58.399 "data_offset": 2048, 00:10:58.399 "data_size": 63488 00:10:58.399 } 00:10:58.399 ] 00:10:58.399 }' 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.399 05:49:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.659 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:58.659 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:58.659 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:58.659 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:58.659 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:58.659 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:58.659 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:58.659 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.659 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.659 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:58.659 [2024-12-12 05:49:06.107935] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:58.660 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.660 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:58.660 "name": "raid_bdev1", 00:10:58.660 "aliases": [ 00:10:58.660 "93292078-f20b-4b03-b730-e56c42165b8a" 00:10:58.660 ], 00:10:58.660 "product_name": "Raid Volume", 00:10:58.660 "block_size": 512, 00:10:58.660 "num_blocks": 253952, 00:10:58.660 "uuid": "93292078-f20b-4b03-b730-e56c42165b8a", 00:10:58.660 "assigned_rate_limits": { 00:10:58.660 "rw_ios_per_sec": 0, 00:10:58.660 "rw_mbytes_per_sec": 0, 00:10:58.660 "r_mbytes_per_sec": 0, 00:10:58.660 "w_mbytes_per_sec": 0 00:10:58.660 }, 00:10:58.660 "claimed": false, 00:10:58.660 "zoned": false, 00:10:58.660 "supported_io_types": { 00:10:58.660 "read": true, 00:10:58.660 "write": true, 00:10:58.660 "unmap": true, 00:10:58.660 "flush": true, 00:10:58.660 "reset": true, 00:10:58.660 "nvme_admin": false, 00:10:58.660 "nvme_io": false, 00:10:58.660 "nvme_io_md": false, 00:10:58.660 "write_zeroes": true, 00:10:58.660 "zcopy": false, 00:10:58.660 "get_zone_info": false, 00:10:58.660 "zone_management": false, 00:10:58.660 "zone_append": false, 00:10:58.660 "compare": false, 00:10:58.660 "compare_and_write": false, 00:10:58.660 "abort": false, 00:10:58.660 "seek_hole": false, 00:10:58.660 "seek_data": false, 00:10:58.660 "copy": false, 00:10:58.660 "nvme_iov_md": false 00:10:58.660 }, 00:10:58.660 "memory_domains": [ 00:10:58.660 { 00:10:58.660 "dma_device_id": "system", 
00:10:58.660 "dma_device_type": 1 00:10:58.660 }, 00:10:58.660 { 00:10:58.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.660 "dma_device_type": 2 00:10:58.660 }, 00:10:58.660 { 00:10:58.660 "dma_device_id": "system", 00:10:58.660 "dma_device_type": 1 00:10:58.660 }, 00:10:58.660 { 00:10:58.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.660 "dma_device_type": 2 00:10:58.660 }, 00:10:58.660 { 00:10:58.660 "dma_device_id": "system", 00:10:58.660 "dma_device_type": 1 00:10:58.660 }, 00:10:58.660 { 00:10:58.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.660 "dma_device_type": 2 00:10:58.660 }, 00:10:58.660 { 00:10:58.660 "dma_device_id": "system", 00:10:58.660 "dma_device_type": 1 00:10:58.660 }, 00:10:58.660 { 00:10:58.660 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.660 "dma_device_type": 2 00:10:58.660 } 00:10:58.660 ], 00:10:58.660 "driver_specific": { 00:10:58.660 "raid": { 00:10:58.660 "uuid": "93292078-f20b-4b03-b730-e56c42165b8a", 00:10:58.660 "strip_size_kb": 64, 00:10:58.660 "state": "online", 00:10:58.660 "raid_level": "raid0", 00:10:58.660 "superblock": true, 00:10:58.660 "num_base_bdevs": 4, 00:10:58.660 "num_base_bdevs_discovered": 4, 00:10:58.660 "num_base_bdevs_operational": 4, 00:10:58.660 "base_bdevs_list": [ 00:10:58.660 { 00:10:58.660 "name": "pt1", 00:10:58.660 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.660 "is_configured": true, 00:10:58.660 "data_offset": 2048, 00:10:58.660 "data_size": 63488 00:10:58.660 }, 00:10:58.660 { 00:10:58.660 "name": "pt2", 00:10:58.660 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.660 "is_configured": true, 00:10:58.660 "data_offset": 2048, 00:10:58.660 "data_size": 63488 00:10:58.660 }, 00:10:58.660 { 00:10:58.660 "name": "pt3", 00:10:58.660 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.660 "is_configured": true, 00:10:58.660 "data_offset": 2048, 00:10:58.660 "data_size": 63488 00:10:58.660 }, 00:10:58.660 { 00:10:58.660 "name": "pt4", 00:10:58.660 
"uuid": "00000000-0000-0000-0000-000000000004", 00:10:58.660 "is_configured": true, 00:10:58.660 "data_offset": 2048, 00:10:58.660 "data_size": 63488 00:10:58.660 } 00:10:58.660 ] 00:10:58.660 } 00:10:58.660 } 00:10:58.660 }' 00:10:58.660 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:58.920 pt2 00:10:58.920 pt3 00:10:58.920 pt4' 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.920 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.180 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.180 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.180 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:59.180 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:59.180 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.180 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.180 [2024-12-12 05:49:06.455268] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.180 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.180 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 93292078-f20b-4b03-b730-e56c42165b8a '!=' 93292078-f20b-4b03-b730-e56c42165b8a ']' 00:10:59.180 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:59.180 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:59.180 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:59.180 05:49:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 71637 00:10:59.180 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 71637 ']' 00:10:59.180 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 71637 00:10:59.180 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:59.180 05:49:06 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:59.180 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71637 00:10:59.180 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:59.180 killing process with pid 71637 00:10:59.180 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:59.180 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71637' 00:10:59.180 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 71637 00:10:59.180 05:49:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 71637 00:10:59.180 [2024-12-12 05:49:06.535728] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:59.180 [2024-12-12 05:49:06.535821] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.180 [2024-12-12 05:49:06.535907] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.180 [2024-12-12 05:49:06.535917] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:59.440 [2024-12-12 05:49:06.915092] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:00.821 05:49:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:00.821 00:11:00.821 real 0m5.328s 00:11:00.821 user 0m7.617s 00:11:00.821 sys 0m0.882s 00:11:00.821 05:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:00.821 05:49:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.821 ************************************ 00:11:00.821 END TEST raid_superblock_test 00:11:00.821 ************************************ 00:11:00.821 
05:49:08 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:00.821 05:49:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:00.821 05:49:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:00.821 05:49:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:00.821 ************************************ 00:11:00.821 START TEST raid_read_error_test 00:11:00.821 ************************************ 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.FeFBmN1TIk 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71896 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:00.821 05:49:08 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71896 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 71896 ']' 00:11:00.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:00.821 05:49:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.821 [2024-12-12 05:49:08.168752] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:11:00.821 [2024-12-12 05:49:08.168943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71896 ] 00:11:00.821 [2024-12-12 05:49:08.341174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.083 [2024-12-12 05:49:08.447975] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.342 [2024-12-12 05:49:08.636893] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.342 [2024-12-12 05:49:08.636949] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.602 05:49:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.602 05:49:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:01.602 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:01.602 05:49:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:01.602 05:49:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.602 05:49:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.602 BaseBdev1_malloc 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.602 true 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.602 [2024-12-12 05:49:09.045622] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:01.602 [2024-12-12 05:49:09.045736] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.602 [2024-12-12 05:49:09.045772] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:01.602 [2024-12-12 05:49:09.045800] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.602 [2024-12-12 05:49:09.047822] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.602 [2024-12-12 05:49:09.047896] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:01.602 BaseBdev1 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.602 BaseBdev2_malloc 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.602 true 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.602 [2024-12-12 05:49:09.111919] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:01.602 [2024-12-12 05:49:09.112017] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.602 [2024-12-12 05:49:09.112051] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:01.602 [2024-12-12 05:49:09.112084] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.602 [2024-12-12 05:49:09.114197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.602 [2024-12-12 05:49:09.114268] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:01.602 BaseBdev2 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.602 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.862 BaseBdev3_malloc 00:11:01.862 05:49:09 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.862 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:01.862 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.862 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.862 true 00:11:01.862 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.862 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:01.862 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.862 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.862 [2024-12-12 05:49:09.189564] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:01.862 [2024-12-12 05:49:09.189657] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.862 [2024-12-12 05:49:09.189695] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:01.862 [2024-12-12 05:49:09.189728] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.862 [2024-12-12 05:49:09.191869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.862 [2024-12-12 05:49:09.191959] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:01.862 BaseBdev3 00:11:01.862 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.862 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:01.862 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:01.862 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.862 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.862 BaseBdev4_malloc 00:11:01.862 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.862 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:01.862 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.862 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.862 true 00:11:01.862 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.862 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:01.862 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.862 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.862 [2024-12-12 05:49:09.255068] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:01.862 [2024-12-12 05:49:09.255185] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:01.862 [2024-12-12 05:49:09.255219] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:01.862 [2024-12-12 05:49:09.255248] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:01.862 [2024-12-12 05:49:09.257377] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:01.863 [2024-12-12 05:49:09.257446] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:01.863 BaseBdev4 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.863 [2024-12-12 05:49:09.267109] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.863 [2024-12-12 05:49:09.268889] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:01.863 [2024-12-12 05:49:09.269016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:01.863 [2024-12-12 05:49:09.269083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:01.863 [2024-12-12 05:49:09.269305] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:01.863 [2024-12-12 05:49:09.269324] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:01.863 [2024-12-12 05:49:09.269576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:01.863 [2024-12-12 05:49:09.269738] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:01.863 [2024-12-12 05:49:09.269749] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:01.863 [2024-12-12 05:49:09.269927] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:01.863 05:49:09 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.863 "name": "raid_bdev1", 00:11:01.863 "uuid": "9a1c40bd-2f36-450c-9a20-517edfe16c1d", 00:11:01.863 "strip_size_kb": 64, 00:11:01.863 "state": "online", 00:11:01.863 "raid_level": "raid0", 00:11:01.863 "superblock": true, 00:11:01.863 "num_base_bdevs": 4, 00:11:01.863 "num_base_bdevs_discovered": 4, 00:11:01.863 "num_base_bdevs_operational": 4, 00:11:01.863 "base_bdevs_list": [ 00:11:01.863 
{ 00:11:01.863 "name": "BaseBdev1", 00:11:01.863 "uuid": "9fc642b9-208d-5132-bb1f-d1c3f127e15b", 00:11:01.863 "is_configured": true, 00:11:01.863 "data_offset": 2048, 00:11:01.863 "data_size": 63488 00:11:01.863 }, 00:11:01.863 { 00:11:01.863 "name": "BaseBdev2", 00:11:01.863 "uuid": "12e24461-62ed-5366-b962-37dd461ce51a", 00:11:01.863 "is_configured": true, 00:11:01.863 "data_offset": 2048, 00:11:01.863 "data_size": 63488 00:11:01.863 }, 00:11:01.863 { 00:11:01.863 "name": "BaseBdev3", 00:11:01.863 "uuid": "b2a304b0-92fe-5169-97a4-bac693a5f868", 00:11:01.863 "is_configured": true, 00:11:01.863 "data_offset": 2048, 00:11:01.863 "data_size": 63488 00:11:01.863 }, 00:11:01.863 { 00:11:01.863 "name": "BaseBdev4", 00:11:01.863 "uuid": "e4236fd7-4cca-51d3-98db-22dab35925a2", 00:11:01.863 "is_configured": true, 00:11:01.863 "data_offset": 2048, 00:11:01.863 "data_size": 63488 00:11:01.863 } 00:11:01.863 ] 00:11:01.863 }' 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.863 05:49:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.432 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:02.432 05:49:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:02.432 [2024-12-12 05:49:09.795495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.372 05:49:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.372 05:49:10 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.372 "name": "raid_bdev1", 00:11:03.372 "uuid": "9a1c40bd-2f36-450c-9a20-517edfe16c1d", 00:11:03.372 "strip_size_kb": 64, 00:11:03.372 "state": "online", 00:11:03.372 "raid_level": "raid0", 00:11:03.372 "superblock": true, 00:11:03.372 "num_base_bdevs": 4, 00:11:03.372 "num_base_bdevs_discovered": 4, 00:11:03.372 "num_base_bdevs_operational": 4, 00:11:03.372 "base_bdevs_list": [ 00:11:03.372 { 00:11:03.372 "name": "BaseBdev1", 00:11:03.372 "uuid": "9fc642b9-208d-5132-bb1f-d1c3f127e15b", 00:11:03.372 "is_configured": true, 00:11:03.372 "data_offset": 2048, 00:11:03.372 "data_size": 63488 00:11:03.372 }, 00:11:03.372 { 00:11:03.372 "name": "BaseBdev2", 00:11:03.372 "uuid": "12e24461-62ed-5366-b962-37dd461ce51a", 00:11:03.372 "is_configured": true, 00:11:03.372 "data_offset": 2048, 00:11:03.372 "data_size": 63488 00:11:03.372 }, 00:11:03.372 { 00:11:03.372 "name": "BaseBdev3", 00:11:03.372 "uuid": "b2a304b0-92fe-5169-97a4-bac693a5f868", 00:11:03.372 "is_configured": true, 00:11:03.372 "data_offset": 2048, 00:11:03.372 "data_size": 63488 00:11:03.372 }, 00:11:03.372 { 00:11:03.372 "name": "BaseBdev4", 00:11:03.372 "uuid": "e4236fd7-4cca-51d3-98db-22dab35925a2", 00:11:03.372 "is_configured": true, 00:11:03.372 "data_offset": 2048, 00:11:03.372 "data_size": 63488 00:11:03.372 } 00:11:03.372 ] 00:11:03.372 }' 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.372 05:49:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.632 05:49:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:03.632 05:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:03.632 05:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.632 [2024-12-12 05:49:11.083237] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:03.632 [2024-12-12 05:49:11.083352] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:03.632 [2024-12-12 05:49:11.086125] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:03.632 [2024-12-12 05:49:11.086223] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.632 [2024-12-12 05:49:11.086285] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:03.632 { 00:11:03.632 "results": [ 00:11:03.632 { 00:11:03.632 "job": "raid_bdev1", 00:11:03.632 "core_mask": "0x1", 00:11:03.632 "workload": "randrw", 00:11:03.632 "percentage": 50, 00:11:03.632 "status": "finished", 00:11:03.632 "queue_depth": 1, 00:11:03.632 "io_size": 131072, 00:11:03.632 "runtime": 1.288501, 00:11:03.632 "iops": 15814.500726037466, 00:11:03.632 "mibps": 1976.8125907546832, 00:11:03.632 "io_failed": 1, 00:11:03.632 "io_timeout": 0, 00:11:03.632 "avg_latency_us": 87.64085937356023, 00:11:03.632 "min_latency_us": 26.606113537117903, 00:11:03.632 "max_latency_us": 1366.5257641921398 00:11:03.632 } 00:11:03.632 ], 00:11:03.632 "core_count": 1 00:11:03.632 } 00:11:03.632 [2024-12-12 05:49:11.086330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:03.632 05:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:03.632 05:49:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71896 00:11:03.632 05:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 71896 ']' 00:11:03.632 05:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 71896 00:11:03.632 05:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:03.632 05:49:11 bdev_raid.raid_read_error_test
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:03.632 05:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71896 00:11:03.632 killing process with pid 71896 00:11:03.632 05:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:03.632 05:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:03.632 05:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71896' 00:11:03.632 05:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 71896 00:11:03.632 [2024-12-12 05:49:11.128357] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:03.632 05:49:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 71896 00:11:04.201 [2024-12-12 05:49:11.441221] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:05.140 05:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.FeFBmN1TIk 00:11:05.140 05:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:05.140 05:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:05.140 05:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.78 00:11:05.140 05:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:05.140 05:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:05.140 05:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:05.140 05:49:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.78 != \0\.\0\0 ]] 00:11:05.140 00:11:05.140 real 0m4.522s 00:11:05.140 user 0m5.271s 00:11:05.140 sys 0m0.565s 00:11:05.140 05:49:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:11:05.140 05:49:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.140 ************************************ 00:11:05.140 END TEST raid_read_error_test 00:11:05.140 ************************************ 00:11:05.140 05:49:12 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:05.140 05:49:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:05.140 05:49:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:05.140 05:49:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:05.140 ************************************ 00:11:05.140 START TEST raid_write_error_test 00:11:05.140 ************************************ 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:05.140 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:05.141 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:05.141 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:05.401 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:05.401 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:05.401 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.kwzBygDdd2 00:11:05.401 05:49:12 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72042 00:11:05.401 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:05.401 05:49:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72042 00:11:05.401 05:49:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 72042 ']' 00:11:05.401 05:49:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.401 05:49:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:05.401 05:49:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.401 05:49:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:05.401 05:49:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.401 [2024-12-12 05:49:12.752429] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:11:05.401 [2024-12-12 05:49:12.752652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72042 ] 00:11:05.660 [2024-12-12 05:49:12.927054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.660 [2024-12-12 05:49:13.034525] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.919 [2024-12-12 05:49:13.223955] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:05.919 [2024-12-12 05:49:13.224098] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.179 BaseBdev1_malloc 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.179 true 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.179 [2024-12-12 05:49:13.614005] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:06.179 [2024-12-12 05:49:13.614065] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.179 [2024-12-12 05:49:13.614083] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:06.179 [2024-12-12 05:49:13.614094] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.179 [2024-12-12 05:49:13.616216] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.179 [2024-12-12 05:49:13.616258] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:06.179 BaseBdev1 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.179 BaseBdev2_malloc 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:06.179 05:49:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.179 true 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.179 [2024-12-12 05:49:13.681274] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:06.179 [2024-12-12 05:49:13.681388] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.179 [2024-12-12 05:49:13.681406] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:06.179 [2024-12-12 05:49:13.681417] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.179 [2024-12-12 05:49:13.683576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.179 [2024-12-12 05:49:13.683612] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:06.179 BaseBdev2 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.179 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:06.180 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:06.180 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.180 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:06.440 BaseBdev3_malloc 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.440 true 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.440 [2024-12-12 05:49:13.759815] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:06.440 [2024-12-12 05:49:13.759867] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.440 [2024-12-12 05:49:13.759898] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:06.440 [2024-12-12 05:49:13.759908] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.440 [2024-12-12 05:49:13.762153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.440 [2024-12-12 05:49:13.762192] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:06.440 BaseBdev3 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.440 BaseBdev4_malloc 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.440 true 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.440 [2024-12-12 05:49:13.825982] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:06.440 [2024-12-12 05:49:13.826034] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.440 [2024-12-12 05:49:13.826050] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:06.440 [2024-12-12 05:49:13.826059] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.440 [2024-12-12 05:49:13.828080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.440 [2024-12-12 05:49:13.828181] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:06.440 BaseBdev4 
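The loop traced above builds one three-layer stack per base bdev — malloc backing store, injectable error bdev, passthru with the final name — and then assembles the raid0 bdev from all four. A standalone sketch of that control flow, with `rpc_cmd` stubbed to `echo` (the real helper talks to a live SPDK target over `/var/tmp/spdk.sock`):

```shell
# Sketch of the setup loop above (bdev_raid.sh@814-821). rpc_cmd is stubbed
# so the sequence can run without an SPDK target; the real calls are identical.
rpc_cmd() { echo "rpc: $*"; }

num_base_bdevs=4
base_bdevs=()
for ((i = 1; i <= num_base_bdevs; i++)); do
    base_bdevs+=("BaseBdev$i")
done

for bdev in "${base_bdevs[@]}"; do
    rpc_cmd bdev_malloc_create 32 512 -b "${bdev}_malloc"          # backing store
    rpc_cmd bdev_error_create "${bdev}_malloc"                     # error-injection layer
    rpc_cmd bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev" # name the raid sees
done

# raid0, 64k strip size (-z 64), superblock enabled (-s)
rpc_cmd bdev_raid_create -z 64 -r raid0 -b "${base_bdevs[*]}" -n raid_bdev1 -s
```

Layering the error bdev under a passthru is what lets the test later inject write failures (`bdev_error_inject_error EE_BaseBdev1_malloc write failure`) without the raid layer knowing anything but the plain `BaseBdevN` name.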
00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.440 [2024-12-12 05:49:13.838020] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:06.440 [2024-12-12 05:49:13.839775] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:06.440 [2024-12-12 05:49:13.839913] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.440 [2024-12-12 05:49:13.839994] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:06.440 [2024-12-12 05:49:13.840262] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:06.440 [2024-12-12 05:49:13.840313] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:06.440 [2024-12-12 05:49:13.840602] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:06.440 [2024-12-12 05:49:13.840797] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:06.440 [2024-12-12 05:49:13.840839] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:06.440 [2024-12-12 05:49:13.841075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.440 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.441 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.441 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.441 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.441 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.441 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.441 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.441 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.441 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.441 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.441 "name": "raid_bdev1", 00:11:06.441 "uuid": "1ecc29b6-21a5-41fe-bfaf-6c1d00097a9d", 00:11:06.441 "strip_size_kb": 64, 00:11:06.441 "state": "online", 00:11:06.441 "raid_level": "raid0", 00:11:06.441 "superblock": true, 00:11:06.441 "num_base_bdevs": 4, 00:11:06.441 "num_base_bdevs_discovered": 4, 00:11:06.441 
"num_base_bdevs_operational": 4, 00:11:06.441 "base_bdevs_list": [ 00:11:06.441 { 00:11:06.441 "name": "BaseBdev1", 00:11:06.441 "uuid": "a67b5aca-dcc9-59d7-85b6-81cf168382f2", 00:11:06.441 "is_configured": true, 00:11:06.441 "data_offset": 2048, 00:11:06.441 "data_size": 63488 00:11:06.441 }, 00:11:06.441 { 00:11:06.441 "name": "BaseBdev2", 00:11:06.441 "uuid": "ef872136-2912-5553-a6be-b649d8154d21", 00:11:06.441 "is_configured": true, 00:11:06.441 "data_offset": 2048, 00:11:06.441 "data_size": 63488 00:11:06.441 }, 00:11:06.441 { 00:11:06.441 "name": "BaseBdev3", 00:11:06.441 "uuid": "5cd08893-bec2-5838-8fb7-5a3672809b17", 00:11:06.441 "is_configured": true, 00:11:06.441 "data_offset": 2048, 00:11:06.441 "data_size": 63488 00:11:06.441 }, 00:11:06.441 { 00:11:06.441 "name": "BaseBdev4", 00:11:06.441 "uuid": "bbc8aa71-3473-50b3-8c69-a31070ace9ec", 00:11:06.441 "is_configured": true, 00:11:06.441 "data_offset": 2048, 00:11:06.441 "data_size": 63488 00:11:06.441 } 00:11:06.441 ] 00:11:06.441 }' 00:11:06.441 05:49:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.441 05:49:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.010 05:49:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:07.010 05:49:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:07.010 [2024-12-12 05:49:14.346536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:07.949 05:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.950 "name": "raid_bdev1", 00:11:07.950 "uuid": "1ecc29b6-21a5-41fe-bfaf-6c1d00097a9d", 00:11:07.950 "strip_size_kb": 64, 00:11:07.950 "state": "online", 00:11:07.950 "raid_level": "raid0", 00:11:07.950 "superblock": true, 00:11:07.950 "num_base_bdevs": 4, 00:11:07.950 "num_base_bdevs_discovered": 4, 00:11:07.950 "num_base_bdevs_operational": 4, 00:11:07.950 "base_bdevs_list": [ 00:11:07.950 { 00:11:07.950 "name": "BaseBdev1", 00:11:07.950 "uuid": "a67b5aca-dcc9-59d7-85b6-81cf168382f2", 00:11:07.950 "is_configured": true, 00:11:07.950 "data_offset": 2048, 00:11:07.950 "data_size": 63488 00:11:07.950 }, 00:11:07.950 { 00:11:07.950 "name": "BaseBdev2", 00:11:07.950 "uuid": "ef872136-2912-5553-a6be-b649d8154d21", 00:11:07.950 "is_configured": true, 00:11:07.950 "data_offset": 2048, 00:11:07.950 "data_size": 63488 00:11:07.950 }, 00:11:07.950 { 00:11:07.950 "name": "BaseBdev3", 00:11:07.950 "uuid": "5cd08893-bec2-5838-8fb7-5a3672809b17", 00:11:07.950 "is_configured": true, 00:11:07.950 "data_offset": 2048, 00:11:07.950 "data_size": 63488 00:11:07.950 }, 00:11:07.950 { 00:11:07.950 "name": "BaseBdev4", 00:11:07.950 "uuid": "bbc8aa71-3473-50b3-8c69-a31070ace9ec", 00:11:07.950 "is_configured": true, 00:11:07.950 "data_offset": 2048, 00:11:07.950 "data_size": 63488 00:11:07.950 } 00:11:07.950 ] 00:11:07.950 }' 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.950 05:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.209 05:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:08.209 05:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.209 05:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:08.209 [2024-12-12 05:49:15.726854] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:08.210 [2024-12-12 05:49:15.726970] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:08.210 [2024-12-12 05:49:15.729930] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.210 [2024-12-12 05:49:15.730045] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.210 [2024-12-12 05:49:15.730136] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:08.210 [2024-12-12 05:49:15.730212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:08.470 { 00:11:08.470 "results": [ 00:11:08.470 { 00:11:08.470 "job": "raid_bdev1", 00:11:08.470 "core_mask": "0x1", 00:11:08.470 "workload": "randrw", 00:11:08.470 "percentage": 50, 00:11:08.470 "status": "finished", 00:11:08.470 "queue_depth": 1, 00:11:08.470 "io_size": 131072, 00:11:08.470 "runtime": 1.381432, 00:11:08.470 "iops": 15853.11473890861, 00:11:08.470 "mibps": 1981.6393423635764, 00:11:08.470 "io_failed": 1, 00:11:08.470 "io_timeout": 0, 00:11:08.470 "avg_latency_us": 87.49952890428524, 00:11:08.470 "min_latency_us": 25.152838427947597, 00:11:08.470 "max_latency_us": 1359.3711790393013 00:11:08.470 } 00:11:08.470 ], 00:11:08.470 "core_count": 1 00:11:08.470 } 00:11:08.470 05:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.470 05:49:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72042 00:11:08.470 05:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 72042 ']' 00:11:08.470 05:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 72042 00:11:08.470 05:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:11:08.470 05:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.470 05:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72042 00:11:08.470 killing process with pid 72042 00:11:08.470 05:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:08.470 05:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:08.470 05:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72042' 00:11:08.470 05:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 72042 00:11:08.470 [2024-12-12 05:49:15.759782] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:08.470 05:49:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 72042 00:11:08.729 [2024-12-12 05:49:16.070466] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:09.669 05:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.kwzBygDdd2 00:11:09.669 05:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:09.669 05:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:09.930 05:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:11:09.930 05:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:09.930 05:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:09.930 05:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:09.930 05:49:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:11:09.930 00:11:09.930 real 0m4.549s 00:11:09.930 user 0m5.334s 00:11:09.930 sys 0m0.526s 00:11:09.930 05:49:17 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.930 ************************************ 00:11:09.930 END TEST raid_write_error_test 00:11:09.930 ************************************ 00:11:09.930 05:49:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.930 05:49:17 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:09.930 05:49:17 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:09.930 05:49:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:09.930 05:49:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.930 05:49:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:09.930 ************************************ 00:11:09.930 START TEST raid_state_function_test 00:11:09.930 ************************************ 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:09.930 Process raid pid: 72180 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=72180 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72180' 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 72180 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 72180 ']' 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:09.930 05:49:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.930 [2024-12-12 05:49:17.367361] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:11:09.930 [2024-12-12 05:49:17.367495] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.191 [2024-12-12 05:49:17.540094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.191 [2024-12-12 05:49:17.644023] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.450 [2024-12-12 05:49:17.839270] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.450 [2024-12-12 05:49:17.839308] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.711 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.711 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:10.711 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:10.711 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.711 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.711 [2024-12-12 05:49:18.174958] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:10.711 [2024-12-12 05:49:18.175061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:10.711 [2024-12-12 05:49:18.175092] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:10.711 [2024-12-12 05:49:18.175115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:10.711 [2024-12-12 05:49:18.175133] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:10.711 [2024-12-12 05:49:18.175153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:10.711 [2024-12-12 05:49:18.175170] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:10.711 [2024-12-12 05:49:18.175213] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:10.711 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.711 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:10.711 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:10.711 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:10.711 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.711 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.711 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:10.711 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.711 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.711 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.711 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.711 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.711 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.711 05:49:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.711 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.711 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.971 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.971 "name": "Existed_Raid", 00:11:10.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.971 "strip_size_kb": 64, 00:11:10.971 "state": "configuring", 00:11:10.971 "raid_level": "concat", 00:11:10.971 "superblock": false, 00:11:10.971 "num_base_bdevs": 4, 00:11:10.971 "num_base_bdevs_discovered": 0, 00:11:10.971 "num_base_bdevs_operational": 4, 00:11:10.971 "base_bdevs_list": [ 00:11:10.971 { 00:11:10.971 "name": "BaseBdev1", 00:11:10.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.971 "is_configured": false, 00:11:10.971 "data_offset": 0, 00:11:10.971 "data_size": 0 00:11:10.971 }, 00:11:10.971 { 00:11:10.971 "name": "BaseBdev2", 00:11:10.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.971 "is_configured": false, 00:11:10.971 "data_offset": 0, 00:11:10.971 "data_size": 0 00:11:10.971 }, 00:11:10.971 { 00:11:10.971 "name": "BaseBdev3", 00:11:10.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.971 "is_configured": false, 00:11:10.971 "data_offset": 0, 00:11:10.971 "data_size": 0 00:11:10.971 }, 00:11:10.971 { 00:11:10.971 "name": "BaseBdev4", 00:11:10.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.971 "is_configured": false, 00:11:10.971 "data_offset": 0, 00:11:10.971 "data_size": 0 00:11:10.971 } 00:11:10.971 ] 00:11:10.971 }' 00:11:10.971 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.971 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.231 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:11.231 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.231 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.231 [2024-12-12 05:49:18.650086] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:11.231 [2024-12-12 05:49:18.650162] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:11.231 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.231 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:11.231 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.231 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.231 [2024-12-12 05:49:18.662067] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:11.231 [2024-12-12 05:49:18.662111] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:11.231 [2024-12-12 05:49:18.662119] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.231 [2024-12-12 05:49:18.662144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.231 [2024-12-12 05:49:18.662150] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:11.231 [2024-12-12 05:49:18.662158] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:11.231 [2024-12-12 05:49:18.662164] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:11.231 [2024-12-12 05:49:18.662172] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:11.231 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.231 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:11.231 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.231 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.231 [2024-12-12 05:49:18.708105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:11.231 BaseBdev1 00:11:11.231 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.231 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:11.231 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:11.231 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:11.231 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:11.231 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:11.231 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:11.231 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:11.231 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.232 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.232 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.232 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:11.232 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.232 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.232 [ 00:11:11.232 { 00:11:11.232 "name": "BaseBdev1", 00:11:11.232 "aliases": [ 00:11:11.232 "0285be1d-0007-40d4-b346-8105e8b9e8ca" 00:11:11.232 ], 00:11:11.232 "product_name": "Malloc disk", 00:11:11.232 "block_size": 512, 00:11:11.232 "num_blocks": 65536, 00:11:11.232 "uuid": "0285be1d-0007-40d4-b346-8105e8b9e8ca", 00:11:11.232 "assigned_rate_limits": { 00:11:11.232 "rw_ios_per_sec": 0, 00:11:11.232 "rw_mbytes_per_sec": 0, 00:11:11.232 "r_mbytes_per_sec": 0, 00:11:11.232 "w_mbytes_per_sec": 0 00:11:11.232 }, 00:11:11.232 "claimed": true, 00:11:11.232 "claim_type": "exclusive_write", 00:11:11.232 "zoned": false, 00:11:11.232 "supported_io_types": { 00:11:11.232 "read": true, 00:11:11.232 "write": true, 00:11:11.232 "unmap": true, 00:11:11.232 "flush": true, 00:11:11.232 "reset": true, 00:11:11.232 "nvme_admin": false, 00:11:11.232 "nvme_io": false, 00:11:11.232 "nvme_io_md": false, 00:11:11.232 "write_zeroes": true, 00:11:11.232 "zcopy": true, 00:11:11.232 "get_zone_info": false, 00:11:11.232 "zone_management": false, 00:11:11.232 "zone_append": false, 00:11:11.232 "compare": false, 00:11:11.232 "compare_and_write": false, 00:11:11.232 "abort": true, 00:11:11.232 "seek_hole": false, 00:11:11.232 "seek_data": false, 00:11:11.232 "copy": true, 00:11:11.232 "nvme_iov_md": false 00:11:11.232 }, 00:11:11.232 "memory_domains": [ 00:11:11.232 { 00:11:11.232 "dma_device_id": "system", 00:11:11.232 "dma_device_type": 1 00:11:11.232 }, 00:11:11.232 { 00:11:11.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:11.232 "dma_device_type": 2 00:11:11.232 } 00:11:11.232 ], 00:11:11.232 "driver_specific": {} 00:11:11.232 } 00:11:11.232 ] 00:11:11.232 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:11.232 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:11.232 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:11.232 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.232 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.232 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.232 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.232 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.232 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.232 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.232 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.232 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.492 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.492 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.492 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.492 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.492 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.492 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.492 "name": "Existed_Raid", 
00:11:11.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.492 "strip_size_kb": 64, 00:11:11.492 "state": "configuring", 00:11:11.492 "raid_level": "concat", 00:11:11.492 "superblock": false, 00:11:11.492 "num_base_bdevs": 4, 00:11:11.492 "num_base_bdevs_discovered": 1, 00:11:11.492 "num_base_bdevs_operational": 4, 00:11:11.492 "base_bdevs_list": [ 00:11:11.492 { 00:11:11.492 "name": "BaseBdev1", 00:11:11.492 "uuid": "0285be1d-0007-40d4-b346-8105e8b9e8ca", 00:11:11.492 "is_configured": true, 00:11:11.492 "data_offset": 0, 00:11:11.492 "data_size": 65536 00:11:11.492 }, 00:11:11.492 { 00:11:11.492 "name": "BaseBdev2", 00:11:11.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.492 "is_configured": false, 00:11:11.492 "data_offset": 0, 00:11:11.492 "data_size": 0 00:11:11.492 }, 00:11:11.492 { 00:11:11.492 "name": "BaseBdev3", 00:11:11.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.492 "is_configured": false, 00:11:11.492 "data_offset": 0, 00:11:11.492 "data_size": 0 00:11:11.492 }, 00:11:11.492 { 00:11:11.492 "name": "BaseBdev4", 00:11:11.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.492 "is_configured": false, 00:11:11.492 "data_offset": 0, 00:11:11.492 "data_size": 0 00:11:11.492 } 00:11:11.492 ] 00:11:11.492 }' 00:11:11.492 05:49:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.492 05:49:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.753 [2024-12-12 05:49:19.187328] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:11.753 [2024-12-12 05:49:19.187425] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.753 [2024-12-12 05:49:19.199355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:11.753 [2024-12-12 05:49:19.201212] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.753 [2024-12-12 05:49:19.201287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.753 [2024-12-12 05:49:19.201314] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:11.753 [2024-12-12 05:49:19.201338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:11.753 [2024-12-12 05:49:19.201356] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:11.753 [2024-12-12 05:49:19.201376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:11.753 "name": "Existed_Raid", 00:11:11.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.753 "strip_size_kb": 64, 00:11:11.753 "state": "configuring", 00:11:11.753 "raid_level": "concat", 00:11:11.753 "superblock": false, 00:11:11.753 "num_base_bdevs": 4, 00:11:11.753 
"num_base_bdevs_discovered": 1, 00:11:11.753 "num_base_bdevs_operational": 4, 00:11:11.753 "base_bdevs_list": [ 00:11:11.753 { 00:11:11.753 "name": "BaseBdev1", 00:11:11.753 "uuid": "0285be1d-0007-40d4-b346-8105e8b9e8ca", 00:11:11.753 "is_configured": true, 00:11:11.753 "data_offset": 0, 00:11:11.753 "data_size": 65536 00:11:11.753 }, 00:11:11.753 { 00:11:11.753 "name": "BaseBdev2", 00:11:11.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.753 "is_configured": false, 00:11:11.753 "data_offset": 0, 00:11:11.753 "data_size": 0 00:11:11.753 }, 00:11:11.753 { 00:11:11.753 "name": "BaseBdev3", 00:11:11.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.753 "is_configured": false, 00:11:11.753 "data_offset": 0, 00:11:11.753 "data_size": 0 00:11:11.753 }, 00:11:11.753 { 00:11:11.753 "name": "BaseBdev4", 00:11:11.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:11.753 "is_configured": false, 00:11:11.753 "data_offset": 0, 00:11:11.753 "data_size": 0 00:11:11.753 } 00:11:11.753 ] 00:11:11.753 }' 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:11.753 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.323 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:12.323 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.323 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.323 [2024-12-12 05:49:19.684064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.323 BaseBdev2 00:11:12.323 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.323 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:12.323 05:49:19 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:12.323 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.323 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:12.323 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.323 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.323 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.323 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.323 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.323 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.323 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:12.323 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.323 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.323 [ 00:11:12.323 { 00:11:12.323 "name": "BaseBdev2", 00:11:12.323 "aliases": [ 00:11:12.323 "56f51283-b07a-42eb-9af2-1619eef8f2fa" 00:11:12.323 ], 00:11:12.323 "product_name": "Malloc disk", 00:11:12.323 "block_size": 512, 00:11:12.323 "num_blocks": 65536, 00:11:12.323 "uuid": "56f51283-b07a-42eb-9af2-1619eef8f2fa", 00:11:12.323 "assigned_rate_limits": { 00:11:12.323 "rw_ios_per_sec": 0, 00:11:12.323 "rw_mbytes_per_sec": 0, 00:11:12.323 "r_mbytes_per_sec": 0, 00:11:12.323 "w_mbytes_per_sec": 0 00:11:12.323 }, 00:11:12.323 "claimed": true, 00:11:12.323 "claim_type": "exclusive_write", 00:11:12.323 "zoned": false, 00:11:12.324 "supported_io_types": { 
00:11:12.324 "read": true, 00:11:12.324 "write": true, 00:11:12.324 "unmap": true, 00:11:12.324 "flush": true, 00:11:12.324 "reset": true, 00:11:12.324 "nvme_admin": false, 00:11:12.324 "nvme_io": false, 00:11:12.324 "nvme_io_md": false, 00:11:12.324 "write_zeroes": true, 00:11:12.324 "zcopy": true, 00:11:12.324 "get_zone_info": false, 00:11:12.324 "zone_management": false, 00:11:12.324 "zone_append": false, 00:11:12.324 "compare": false, 00:11:12.324 "compare_and_write": false, 00:11:12.324 "abort": true, 00:11:12.324 "seek_hole": false, 00:11:12.324 "seek_data": false, 00:11:12.324 "copy": true, 00:11:12.324 "nvme_iov_md": false 00:11:12.324 }, 00:11:12.324 "memory_domains": [ 00:11:12.324 { 00:11:12.324 "dma_device_id": "system", 00:11:12.324 "dma_device_type": 1 00:11:12.324 }, 00:11:12.324 { 00:11:12.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.324 "dma_device_type": 2 00:11:12.324 } 00:11:12.324 ], 00:11:12.324 "driver_specific": {} 00:11:12.324 } 00:11:12.324 ] 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.324 "name": "Existed_Raid", 00:11:12.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.324 "strip_size_kb": 64, 00:11:12.324 "state": "configuring", 00:11:12.324 "raid_level": "concat", 00:11:12.324 "superblock": false, 00:11:12.324 "num_base_bdevs": 4, 00:11:12.324 "num_base_bdevs_discovered": 2, 00:11:12.324 "num_base_bdevs_operational": 4, 00:11:12.324 "base_bdevs_list": [ 00:11:12.324 { 00:11:12.324 "name": "BaseBdev1", 00:11:12.324 "uuid": "0285be1d-0007-40d4-b346-8105e8b9e8ca", 00:11:12.324 "is_configured": true, 00:11:12.324 "data_offset": 0, 00:11:12.324 "data_size": 65536 00:11:12.324 }, 00:11:12.324 { 00:11:12.324 "name": "BaseBdev2", 00:11:12.324 "uuid": "56f51283-b07a-42eb-9af2-1619eef8f2fa", 00:11:12.324 
"is_configured": true, 00:11:12.324 "data_offset": 0, 00:11:12.324 "data_size": 65536 00:11:12.324 }, 00:11:12.324 { 00:11:12.324 "name": "BaseBdev3", 00:11:12.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.324 "is_configured": false, 00:11:12.324 "data_offset": 0, 00:11:12.324 "data_size": 0 00:11:12.324 }, 00:11:12.324 { 00:11:12.324 "name": "BaseBdev4", 00:11:12.324 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.324 "is_configured": false, 00:11:12.324 "data_offset": 0, 00:11:12.324 "data_size": 0 00:11:12.324 } 00:11:12.324 ] 00:11:12.324 }' 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.324 05:49:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.894 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:12.894 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.894 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.894 [2024-12-12 05:49:20.248873] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:12.894 BaseBdev3 00:11:12.894 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.894 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:12.894 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:12.894 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.895 [ 00:11:12.895 { 00:11:12.895 "name": "BaseBdev3", 00:11:12.895 "aliases": [ 00:11:12.895 "3b66dc1d-5c01-4513-8627-235428d0292e" 00:11:12.895 ], 00:11:12.895 "product_name": "Malloc disk", 00:11:12.895 "block_size": 512, 00:11:12.895 "num_blocks": 65536, 00:11:12.895 "uuid": "3b66dc1d-5c01-4513-8627-235428d0292e", 00:11:12.895 "assigned_rate_limits": { 00:11:12.895 "rw_ios_per_sec": 0, 00:11:12.895 "rw_mbytes_per_sec": 0, 00:11:12.895 "r_mbytes_per_sec": 0, 00:11:12.895 "w_mbytes_per_sec": 0 00:11:12.895 }, 00:11:12.895 "claimed": true, 00:11:12.895 "claim_type": "exclusive_write", 00:11:12.895 "zoned": false, 00:11:12.895 "supported_io_types": { 00:11:12.895 "read": true, 00:11:12.895 "write": true, 00:11:12.895 "unmap": true, 00:11:12.895 "flush": true, 00:11:12.895 "reset": true, 00:11:12.895 "nvme_admin": false, 00:11:12.895 "nvme_io": false, 00:11:12.895 "nvme_io_md": false, 00:11:12.895 "write_zeroes": true, 00:11:12.895 "zcopy": true, 00:11:12.895 "get_zone_info": false, 00:11:12.895 "zone_management": false, 00:11:12.895 "zone_append": false, 00:11:12.895 "compare": false, 00:11:12.895 "compare_and_write": false, 
00:11:12.895 "abort": true, 00:11:12.895 "seek_hole": false, 00:11:12.895 "seek_data": false, 00:11:12.895 "copy": true, 00:11:12.895 "nvme_iov_md": false 00:11:12.895 }, 00:11:12.895 "memory_domains": [ 00:11:12.895 { 00:11:12.895 "dma_device_id": "system", 00:11:12.895 "dma_device_type": 1 00:11:12.895 }, 00:11:12.895 { 00:11:12.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.895 "dma_device_type": 2 00:11:12.895 } 00:11:12.895 ], 00:11:12.895 "driver_specific": {} 00:11:12.895 } 00:11:12.895 ] 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:12.895 "name": "Existed_Raid", 00:11:12.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.895 "strip_size_kb": 64, 00:11:12.895 "state": "configuring", 00:11:12.895 "raid_level": "concat", 00:11:12.895 "superblock": false, 00:11:12.895 "num_base_bdevs": 4, 00:11:12.895 "num_base_bdevs_discovered": 3, 00:11:12.895 "num_base_bdevs_operational": 4, 00:11:12.895 "base_bdevs_list": [ 00:11:12.895 { 00:11:12.895 "name": "BaseBdev1", 00:11:12.895 "uuid": "0285be1d-0007-40d4-b346-8105e8b9e8ca", 00:11:12.895 "is_configured": true, 00:11:12.895 "data_offset": 0, 00:11:12.895 "data_size": 65536 00:11:12.895 }, 00:11:12.895 { 00:11:12.895 "name": "BaseBdev2", 00:11:12.895 "uuid": "56f51283-b07a-42eb-9af2-1619eef8f2fa", 00:11:12.895 "is_configured": true, 00:11:12.895 "data_offset": 0, 00:11:12.895 "data_size": 65536 00:11:12.895 }, 00:11:12.895 { 00:11:12.895 "name": "BaseBdev3", 00:11:12.895 "uuid": "3b66dc1d-5c01-4513-8627-235428d0292e", 00:11:12.895 "is_configured": true, 00:11:12.895 "data_offset": 0, 00:11:12.895 "data_size": 65536 00:11:12.895 }, 00:11:12.895 { 00:11:12.895 "name": "BaseBdev4", 00:11:12.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.895 "is_configured": false, 
00:11:12.895 "data_offset": 0, 00:11:12.895 "data_size": 0 00:11:12.895 } 00:11:12.895 ] 00:11:12.895 }' 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:12.895 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.155 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:13.155 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.155 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.415 [2024-12-12 05:49:20.697821] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:13.415 [2024-12-12 05:49:20.697917] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:13.415 [2024-12-12 05:49:20.697943] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:13.415 [2024-12-12 05:49:20.698260] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:13.415 [2024-12-12 05:49:20.698476] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:13.415 [2024-12-12 05:49:20.698540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:13.415 [2024-12-12 05:49:20.698867] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:13.415 BaseBdev4 00:11:13.415 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.415 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:13.415 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:13.415 05:49:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.416 [ 00:11:13.416 { 00:11:13.416 "name": "BaseBdev4", 00:11:13.416 "aliases": [ 00:11:13.416 "d92a60cf-2197-4508-ae86-6415a88931be" 00:11:13.416 ], 00:11:13.416 "product_name": "Malloc disk", 00:11:13.416 "block_size": 512, 00:11:13.416 "num_blocks": 65536, 00:11:13.416 "uuid": "d92a60cf-2197-4508-ae86-6415a88931be", 00:11:13.416 "assigned_rate_limits": { 00:11:13.416 "rw_ios_per_sec": 0, 00:11:13.416 "rw_mbytes_per_sec": 0, 00:11:13.416 "r_mbytes_per_sec": 0, 00:11:13.416 "w_mbytes_per_sec": 0 00:11:13.416 }, 00:11:13.416 "claimed": true, 00:11:13.416 "claim_type": "exclusive_write", 00:11:13.416 "zoned": false, 00:11:13.416 "supported_io_types": { 00:11:13.416 "read": true, 00:11:13.416 "write": true, 00:11:13.416 "unmap": true, 00:11:13.416 "flush": true, 00:11:13.416 "reset": true, 00:11:13.416 
"nvme_admin": false, 00:11:13.416 "nvme_io": false, 00:11:13.416 "nvme_io_md": false, 00:11:13.416 "write_zeroes": true, 00:11:13.416 "zcopy": true, 00:11:13.416 "get_zone_info": false, 00:11:13.416 "zone_management": false, 00:11:13.416 "zone_append": false, 00:11:13.416 "compare": false, 00:11:13.416 "compare_and_write": false, 00:11:13.416 "abort": true, 00:11:13.416 "seek_hole": false, 00:11:13.416 "seek_data": false, 00:11:13.416 "copy": true, 00:11:13.416 "nvme_iov_md": false 00:11:13.416 }, 00:11:13.416 "memory_domains": [ 00:11:13.416 { 00:11:13.416 "dma_device_id": "system", 00:11:13.416 "dma_device_type": 1 00:11:13.416 }, 00:11:13.416 { 00:11:13.416 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.416 "dma_device_type": 2 00:11:13.416 } 00:11:13.416 ], 00:11:13.416 "driver_specific": {} 00:11:13.416 } 00:11:13.416 ] 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.416 
05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.416 "name": "Existed_Raid", 00:11:13.416 "uuid": "edcd3dbc-a203-4e37-aaea-8a13aa6299ab", 00:11:13.416 "strip_size_kb": 64, 00:11:13.416 "state": "online", 00:11:13.416 "raid_level": "concat", 00:11:13.416 "superblock": false, 00:11:13.416 "num_base_bdevs": 4, 00:11:13.416 "num_base_bdevs_discovered": 4, 00:11:13.416 "num_base_bdevs_operational": 4, 00:11:13.416 "base_bdevs_list": [ 00:11:13.416 { 00:11:13.416 "name": "BaseBdev1", 00:11:13.416 "uuid": "0285be1d-0007-40d4-b346-8105e8b9e8ca", 00:11:13.416 "is_configured": true, 00:11:13.416 "data_offset": 0, 00:11:13.416 "data_size": 65536 00:11:13.416 }, 00:11:13.416 { 00:11:13.416 "name": "BaseBdev2", 00:11:13.416 "uuid": "56f51283-b07a-42eb-9af2-1619eef8f2fa", 00:11:13.416 "is_configured": true, 00:11:13.416 "data_offset": 0, 00:11:13.416 "data_size": 65536 00:11:13.416 }, 00:11:13.416 { 00:11:13.416 "name": "BaseBdev3", 
00:11:13.416 "uuid": "3b66dc1d-5c01-4513-8627-235428d0292e", 00:11:13.416 "is_configured": true, 00:11:13.416 "data_offset": 0, 00:11:13.416 "data_size": 65536 00:11:13.416 }, 00:11:13.416 { 00:11:13.416 "name": "BaseBdev4", 00:11:13.416 "uuid": "d92a60cf-2197-4508-ae86-6415a88931be", 00:11:13.416 "is_configured": true, 00:11:13.416 "data_offset": 0, 00:11:13.416 "data_size": 65536 00:11:13.416 } 00:11:13.416 ] 00:11:13.416 }' 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.416 05:49:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.676 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:13.676 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:13.676 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:13.676 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:13.677 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:13.677 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:13.677 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:13.677 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:13.677 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.677 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.677 [2024-12-12 05:49:21.145432] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.677 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.677 
05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:13.677 "name": "Existed_Raid", 00:11:13.677 "aliases": [ 00:11:13.677 "edcd3dbc-a203-4e37-aaea-8a13aa6299ab" 00:11:13.677 ], 00:11:13.677 "product_name": "Raid Volume", 00:11:13.677 "block_size": 512, 00:11:13.677 "num_blocks": 262144, 00:11:13.677 "uuid": "edcd3dbc-a203-4e37-aaea-8a13aa6299ab", 00:11:13.677 "assigned_rate_limits": { 00:11:13.677 "rw_ios_per_sec": 0, 00:11:13.677 "rw_mbytes_per_sec": 0, 00:11:13.677 "r_mbytes_per_sec": 0, 00:11:13.677 "w_mbytes_per_sec": 0 00:11:13.677 }, 00:11:13.677 "claimed": false, 00:11:13.677 "zoned": false, 00:11:13.677 "supported_io_types": { 00:11:13.677 "read": true, 00:11:13.677 "write": true, 00:11:13.677 "unmap": true, 00:11:13.677 "flush": true, 00:11:13.677 "reset": true, 00:11:13.677 "nvme_admin": false, 00:11:13.677 "nvme_io": false, 00:11:13.677 "nvme_io_md": false, 00:11:13.677 "write_zeroes": true, 00:11:13.677 "zcopy": false, 00:11:13.677 "get_zone_info": false, 00:11:13.677 "zone_management": false, 00:11:13.677 "zone_append": false, 00:11:13.677 "compare": false, 00:11:13.677 "compare_and_write": false, 00:11:13.677 "abort": false, 00:11:13.677 "seek_hole": false, 00:11:13.677 "seek_data": false, 00:11:13.677 "copy": false, 00:11:13.677 "nvme_iov_md": false 00:11:13.677 }, 00:11:13.677 "memory_domains": [ 00:11:13.677 { 00:11:13.677 "dma_device_id": "system", 00:11:13.677 "dma_device_type": 1 00:11:13.677 }, 00:11:13.677 { 00:11:13.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.677 "dma_device_type": 2 00:11:13.677 }, 00:11:13.677 { 00:11:13.677 "dma_device_id": "system", 00:11:13.677 "dma_device_type": 1 00:11:13.677 }, 00:11:13.677 { 00:11:13.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.677 "dma_device_type": 2 00:11:13.677 }, 00:11:13.677 { 00:11:13.677 "dma_device_id": "system", 00:11:13.677 "dma_device_type": 1 00:11:13.677 }, 00:11:13.677 { 00:11:13.677 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:13.677 "dma_device_type": 2 00:11:13.677 }, 00:11:13.677 { 00:11:13.677 "dma_device_id": "system", 00:11:13.677 "dma_device_type": 1 00:11:13.677 }, 00:11:13.677 { 00:11:13.677 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.677 "dma_device_type": 2 00:11:13.677 } 00:11:13.677 ], 00:11:13.677 "driver_specific": { 00:11:13.677 "raid": { 00:11:13.677 "uuid": "edcd3dbc-a203-4e37-aaea-8a13aa6299ab", 00:11:13.677 "strip_size_kb": 64, 00:11:13.677 "state": "online", 00:11:13.677 "raid_level": "concat", 00:11:13.677 "superblock": false, 00:11:13.677 "num_base_bdevs": 4, 00:11:13.677 "num_base_bdevs_discovered": 4, 00:11:13.677 "num_base_bdevs_operational": 4, 00:11:13.677 "base_bdevs_list": [ 00:11:13.677 { 00:11:13.677 "name": "BaseBdev1", 00:11:13.677 "uuid": "0285be1d-0007-40d4-b346-8105e8b9e8ca", 00:11:13.677 "is_configured": true, 00:11:13.677 "data_offset": 0, 00:11:13.677 "data_size": 65536 00:11:13.677 }, 00:11:13.677 { 00:11:13.677 "name": "BaseBdev2", 00:11:13.677 "uuid": "56f51283-b07a-42eb-9af2-1619eef8f2fa", 00:11:13.677 "is_configured": true, 00:11:13.677 "data_offset": 0, 00:11:13.677 "data_size": 65536 00:11:13.677 }, 00:11:13.677 { 00:11:13.677 "name": "BaseBdev3", 00:11:13.677 "uuid": "3b66dc1d-5c01-4513-8627-235428d0292e", 00:11:13.677 "is_configured": true, 00:11:13.677 "data_offset": 0, 00:11:13.677 "data_size": 65536 00:11:13.677 }, 00:11:13.677 { 00:11:13.677 "name": "BaseBdev4", 00:11:13.677 "uuid": "d92a60cf-2197-4508-ae86-6415a88931be", 00:11:13.677 "is_configured": true, 00:11:13.677 "data_offset": 0, 00:11:13.677 "data_size": 65536 00:11:13.677 } 00:11:13.677 ] 00:11:13.677 } 00:11:13.677 } 00:11:13.677 }' 00:11:13.677 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:13.937 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:13.937 BaseBdev2 
00:11:13.937 BaseBdev3 00:11:13.937 BaseBdev4' 00:11:13.937 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.937 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:13.937 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.937 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:13.937 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.937 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.937 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.937 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.937 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.937 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.937 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.937 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.937 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:13.937 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.938 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.938 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.938 05:49:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.938 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.938 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.938 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:13.938 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.938 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.938 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.938 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.938 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.938 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.938 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:13.938 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:13.938 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.938 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.938 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:13.938 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.938 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:13.938 05:49:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:13.938 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:13.938 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.938 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.938 [2024-12-12 05:49:21.416715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:13.938 [2024-12-12 05:49:21.416784] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:13.938 [2024-12-12 05:49:21.416852] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.198 "name": "Existed_Raid", 00:11:14.198 "uuid": "edcd3dbc-a203-4e37-aaea-8a13aa6299ab", 00:11:14.198 "strip_size_kb": 64, 00:11:14.198 "state": "offline", 00:11:14.198 "raid_level": "concat", 00:11:14.198 "superblock": false, 00:11:14.198 "num_base_bdevs": 4, 00:11:14.198 "num_base_bdevs_discovered": 3, 00:11:14.198 "num_base_bdevs_operational": 3, 00:11:14.198 "base_bdevs_list": [ 00:11:14.198 { 00:11:14.198 "name": null, 00:11:14.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.198 "is_configured": false, 00:11:14.198 "data_offset": 0, 00:11:14.198 "data_size": 65536 00:11:14.198 }, 00:11:14.198 { 00:11:14.198 "name": "BaseBdev2", 00:11:14.198 "uuid": "56f51283-b07a-42eb-9af2-1619eef8f2fa", 00:11:14.198 "is_configured": 
true, 00:11:14.198 "data_offset": 0, 00:11:14.198 "data_size": 65536 00:11:14.198 }, 00:11:14.198 { 00:11:14.198 "name": "BaseBdev3", 00:11:14.198 "uuid": "3b66dc1d-5c01-4513-8627-235428d0292e", 00:11:14.198 "is_configured": true, 00:11:14.198 "data_offset": 0, 00:11:14.198 "data_size": 65536 00:11:14.198 }, 00:11:14.198 { 00:11:14.198 "name": "BaseBdev4", 00:11:14.198 "uuid": "d92a60cf-2197-4508-ae86-6415a88931be", 00:11:14.198 "is_configured": true, 00:11:14.198 "data_offset": 0, 00:11:14.198 "data_size": 65536 00:11:14.198 } 00:11:14.198 ] 00:11:14.198 }' 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.198 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.458 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:14.458 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:14.458 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.458 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:14.458 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.458 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.458 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.458 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:14.458 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:14.458 05:49:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:14.458 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:14.458 05:49:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.458 [2024-12-12 05:49:21.969279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:14.718 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.718 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:14.718 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:14.718 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.718 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.718 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.718 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:14.718 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.718 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:14.718 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:14.718 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:14.718 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.718 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.718 [2024-12-12 05:49:22.113697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:14.718 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.718 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:14.718 05:49:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:14.718 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:14.718 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.718 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.718 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.718 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.978 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:14.978 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:14.978 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:14.978 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.978 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.978 [2024-12-12 05:49:22.252361] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:14.979 [2024-12-12 05:49:22.252455] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.979 BaseBdev2 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.979 [ 00:11:14.979 { 00:11:14.979 "name": "BaseBdev2", 00:11:14.979 "aliases": [ 00:11:14.979 "1ae95596-722a-4e1a-980c-dad04f4e5154" 00:11:14.979 ], 00:11:14.979 "product_name": "Malloc disk", 00:11:14.979 "block_size": 512, 00:11:14.979 "num_blocks": 65536, 00:11:14.979 "uuid": "1ae95596-722a-4e1a-980c-dad04f4e5154", 00:11:14.979 "assigned_rate_limits": { 00:11:14.979 "rw_ios_per_sec": 0, 00:11:14.979 "rw_mbytes_per_sec": 0, 00:11:14.979 "r_mbytes_per_sec": 0, 00:11:14.979 "w_mbytes_per_sec": 0 00:11:14.979 }, 00:11:14.979 "claimed": false, 00:11:14.979 "zoned": false, 00:11:14.979 "supported_io_types": { 00:11:14.979 "read": true, 00:11:14.979 "write": true, 00:11:14.979 "unmap": true, 00:11:14.979 "flush": true, 00:11:14.979 "reset": true, 00:11:14.979 "nvme_admin": false, 00:11:14.979 "nvme_io": false, 00:11:14.979 "nvme_io_md": false, 00:11:14.979 "write_zeroes": true, 00:11:14.979 "zcopy": true, 00:11:14.979 "get_zone_info": false, 00:11:14.979 "zone_management": false, 00:11:14.979 "zone_append": false, 00:11:14.979 "compare": false, 00:11:14.979 "compare_and_write": false, 00:11:14.979 "abort": true, 00:11:14.979 "seek_hole": false, 00:11:14.979 
"seek_data": false, 00:11:14.979 "copy": true, 00:11:14.979 "nvme_iov_md": false 00:11:14.979 }, 00:11:14.979 "memory_domains": [ 00:11:14.979 { 00:11:14.979 "dma_device_id": "system", 00:11:14.979 "dma_device_type": 1 00:11:14.979 }, 00:11:14.979 { 00:11:14.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.979 "dma_device_type": 2 00:11:14.979 } 00:11:14.979 ], 00:11:14.979 "driver_specific": {} 00:11:14.979 } 00:11:14.979 ] 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.979 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.240 BaseBdev3 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.240 [ 00:11:15.240 { 00:11:15.240 "name": "BaseBdev3", 00:11:15.240 "aliases": [ 00:11:15.240 "c0f66f0d-99c0-4701-b8bc-94b492354d33" 00:11:15.240 ], 00:11:15.240 "product_name": "Malloc disk", 00:11:15.240 "block_size": 512, 00:11:15.240 "num_blocks": 65536, 00:11:15.240 "uuid": "c0f66f0d-99c0-4701-b8bc-94b492354d33", 00:11:15.240 "assigned_rate_limits": { 00:11:15.240 "rw_ios_per_sec": 0, 00:11:15.240 "rw_mbytes_per_sec": 0, 00:11:15.240 "r_mbytes_per_sec": 0, 00:11:15.240 "w_mbytes_per_sec": 0 00:11:15.240 }, 00:11:15.240 "claimed": false, 00:11:15.240 "zoned": false, 00:11:15.240 "supported_io_types": { 00:11:15.240 "read": true, 00:11:15.240 "write": true, 00:11:15.240 "unmap": true, 00:11:15.240 "flush": true, 00:11:15.240 "reset": true, 00:11:15.240 "nvme_admin": false, 00:11:15.240 "nvme_io": false, 00:11:15.240 "nvme_io_md": false, 00:11:15.240 "write_zeroes": true, 00:11:15.240 "zcopy": true, 00:11:15.240 "get_zone_info": false, 00:11:15.240 "zone_management": false, 00:11:15.240 "zone_append": false, 00:11:15.240 "compare": false, 00:11:15.240 "compare_and_write": false, 00:11:15.240 "abort": true, 00:11:15.240 "seek_hole": false, 00:11:15.240 "seek_data": false, 
00:11:15.240 "copy": true, 00:11:15.240 "nvme_iov_md": false 00:11:15.240 }, 00:11:15.240 "memory_domains": [ 00:11:15.240 { 00:11:15.240 "dma_device_id": "system", 00:11:15.240 "dma_device_type": 1 00:11:15.240 }, 00:11:15.240 { 00:11:15.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.240 "dma_device_type": 2 00:11:15.240 } 00:11:15.240 ], 00:11:15.240 "driver_specific": {} 00:11:15.240 } 00:11:15.240 ] 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.240 BaseBdev4 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.240 
05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.240 [ 00:11:15.240 { 00:11:15.240 "name": "BaseBdev4", 00:11:15.240 "aliases": [ 00:11:15.240 "475368b0-0567-4b3e-8e29-a64857680e02" 00:11:15.240 ], 00:11:15.240 "product_name": "Malloc disk", 00:11:15.240 "block_size": 512, 00:11:15.240 "num_blocks": 65536, 00:11:15.240 "uuid": "475368b0-0567-4b3e-8e29-a64857680e02", 00:11:15.240 "assigned_rate_limits": { 00:11:15.240 "rw_ios_per_sec": 0, 00:11:15.240 "rw_mbytes_per_sec": 0, 00:11:15.240 "r_mbytes_per_sec": 0, 00:11:15.240 "w_mbytes_per_sec": 0 00:11:15.240 }, 00:11:15.240 "claimed": false, 00:11:15.240 "zoned": false, 00:11:15.240 "supported_io_types": { 00:11:15.240 "read": true, 00:11:15.240 "write": true, 00:11:15.240 "unmap": true, 00:11:15.240 "flush": true, 00:11:15.240 "reset": true, 00:11:15.240 "nvme_admin": false, 00:11:15.240 "nvme_io": false, 00:11:15.240 "nvme_io_md": false, 00:11:15.240 "write_zeroes": true, 00:11:15.240 "zcopy": true, 00:11:15.240 "get_zone_info": false, 00:11:15.240 "zone_management": false, 00:11:15.240 "zone_append": false, 00:11:15.240 "compare": false, 00:11:15.240 "compare_and_write": false, 00:11:15.240 "abort": true, 00:11:15.240 "seek_hole": false, 00:11:15.240 "seek_data": false, 00:11:15.240 
"copy": true, 00:11:15.240 "nvme_iov_md": false 00:11:15.240 }, 00:11:15.240 "memory_domains": [ 00:11:15.240 { 00:11:15.240 "dma_device_id": "system", 00:11:15.240 "dma_device_type": 1 00:11:15.240 }, 00:11:15.240 { 00:11:15.240 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.240 "dma_device_type": 2 00:11:15.240 } 00:11:15.240 ], 00:11:15.240 "driver_specific": {} 00:11:15.240 } 00:11:15.240 ] 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.240 [2024-12-12 05:49:22.631219] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:15.240 [2024-12-12 05:49:22.631301] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:15.240 [2024-12-12 05:49:22.631345] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.240 [2024-12-12 05:49:22.633083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.240 [2024-12-12 05:49:22.633170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.240 05:49:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.240 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.240 "name": "Existed_Raid", 00:11:15.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.240 "strip_size_kb": 64, 00:11:15.240 "state": "configuring", 00:11:15.240 
"raid_level": "concat", 00:11:15.240 "superblock": false, 00:11:15.241 "num_base_bdevs": 4, 00:11:15.241 "num_base_bdevs_discovered": 3, 00:11:15.241 "num_base_bdevs_operational": 4, 00:11:15.241 "base_bdevs_list": [ 00:11:15.241 { 00:11:15.241 "name": "BaseBdev1", 00:11:15.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.241 "is_configured": false, 00:11:15.241 "data_offset": 0, 00:11:15.241 "data_size": 0 00:11:15.241 }, 00:11:15.241 { 00:11:15.241 "name": "BaseBdev2", 00:11:15.241 "uuid": "1ae95596-722a-4e1a-980c-dad04f4e5154", 00:11:15.241 "is_configured": true, 00:11:15.241 "data_offset": 0, 00:11:15.241 "data_size": 65536 00:11:15.241 }, 00:11:15.241 { 00:11:15.241 "name": "BaseBdev3", 00:11:15.241 "uuid": "c0f66f0d-99c0-4701-b8bc-94b492354d33", 00:11:15.241 "is_configured": true, 00:11:15.241 "data_offset": 0, 00:11:15.241 "data_size": 65536 00:11:15.241 }, 00:11:15.241 { 00:11:15.241 "name": "BaseBdev4", 00:11:15.241 "uuid": "475368b0-0567-4b3e-8e29-a64857680e02", 00:11:15.241 "is_configured": true, 00:11:15.241 "data_offset": 0, 00:11:15.241 "data_size": 65536 00:11:15.241 } 00:11:15.241 ] 00:11:15.241 }' 00:11:15.241 05:49:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.241 05:49:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.530 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:15.530 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.530 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.530 [2024-12-12 05:49:23.046527] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:15.530 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.790 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:15.790 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.790 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.790 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:15.790 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.790 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.790 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.790 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.790 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.790 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.790 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.790 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.790 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.790 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.790 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.790 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.790 "name": "Existed_Raid", 00:11:15.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.790 "strip_size_kb": 64, 00:11:15.790 "state": "configuring", 00:11:15.790 "raid_level": "concat", 00:11:15.790 "superblock": false, 
00:11:15.790 "num_base_bdevs": 4, 00:11:15.790 "num_base_bdevs_discovered": 2, 00:11:15.790 "num_base_bdevs_operational": 4, 00:11:15.790 "base_bdevs_list": [ 00:11:15.790 { 00:11:15.790 "name": "BaseBdev1", 00:11:15.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.790 "is_configured": false, 00:11:15.790 "data_offset": 0, 00:11:15.790 "data_size": 0 00:11:15.790 }, 00:11:15.790 { 00:11:15.790 "name": null, 00:11:15.790 "uuid": "1ae95596-722a-4e1a-980c-dad04f4e5154", 00:11:15.790 "is_configured": false, 00:11:15.790 "data_offset": 0, 00:11:15.790 "data_size": 65536 00:11:15.790 }, 00:11:15.790 { 00:11:15.790 "name": "BaseBdev3", 00:11:15.790 "uuid": "c0f66f0d-99c0-4701-b8bc-94b492354d33", 00:11:15.790 "is_configured": true, 00:11:15.790 "data_offset": 0, 00:11:15.790 "data_size": 65536 00:11:15.790 }, 00:11:15.790 { 00:11:15.790 "name": "BaseBdev4", 00:11:15.790 "uuid": "475368b0-0567-4b3e-8e29-a64857680e02", 00:11:15.790 "is_configured": true, 00:11:15.790 "data_offset": 0, 00:11:15.790 "data_size": 65536 00:11:15.790 } 00:11:15.790 ] 00:11:15.790 }' 00:11:15.790 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.790 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.050 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.050 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.050 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.050 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:16.050 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.050 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:16.050 05:49:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:16.050 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.050 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.050 [2024-12-12 05:49:23.557593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:16.050 BaseBdev1 00:11:16.050 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.050 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:16.050 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:16.050 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.050 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:16.050 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.050 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.050 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.050 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.050 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.050 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.050 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:16.050 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.311 05:49:23 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:16.311 [ 00:11:16.311 { 00:11:16.311 "name": "BaseBdev1", 00:11:16.311 "aliases": [ 00:11:16.311 "aeda5b83-806c-45b4-a61c-be42c1ad6738" 00:11:16.311 ], 00:11:16.311 "product_name": "Malloc disk", 00:11:16.311 "block_size": 512, 00:11:16.311 "num_blocks": 65536, 00:11:16.311 "uuid": "aeda5b83-806c-45b4-a61c-be42c1ad6738", 00:11:16.311 "assigned_rate_limits": { 00:11:16.311 "rw_ios_per_sec": 0, 00:11:16.311 "rw_mbytes_per_sec": 0, 00:11:16.311 "r_mbytes_per_sec": 0, 00:11:16.311 "w_mbytes_per_sec": 0 00:11:16.311 }, 00:11:16.311 "claimed": true, 00:11:16.311 "claim_type": "exclusive_write", 00:11:16.311 "zoned": false, 00:11:16.311 "supported_io_types": { 00:11:16.311 "read": true, 00:11:16.311 "write": true, 00:11:16.311 "unmap": true, 00:11:16.311 "flush": true, 00:11:16.311 "reset": true, 00:11:16.311 "nvme_admin": false, 00:11:16.311 "nvme_io": false, 00:11:16.311 "nvme_io_md": false, 00:11:16.311 "write_zeroes": true, 00:11:16.311 "zcopy": true, 00:11:16.311 "get_zone_info": false, 00:11:16.311 "zone_management": false, 00:11:16.311 "zone_append": false, 00:11:16.311 "compare": false, 00:11:16.311 "compare_and_write": false, 00:11:16.311 "abort": true, 00:11:16.311 "seek_hole": false, 00:11:16.311 "seek_data": false, 00:11:16.311 "copy": true, 00:11:16.311 "nvme_iov_md": false 00:11:16.311 }, 00:11:16.311 "memory_domains": [ 00:11:16.311 { 00:11:16.311 "dma_device_id": "system", 00:11:16.311 "dma_device_type": 1 00:11:16.311 }, 00:11:16.311 { 00:11:16.311 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.311 "dma_device_type": 2 00:11:16.311 } 00:11:16.311 ], 00:11:16.311 "driver_specific": {} 00:11:16.311 } 00:11:16.311 ] 00:11:16.311 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.311 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:16.311 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:16.311 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.311 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.311 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.311 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.311 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.311 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.311 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.311 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.311 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.311 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.311 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.311 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.311 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.311 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.311 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.311 "name": "Existed_Raid", 00:11:16.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.311 "strip_size_kb": 64, 00:11:16.311 "state": "configuring", 00:11:16.311 "raid_level": "concat", 00:11:16.311 "superblock": false, 
00:11:16.311 "num_base_bdevs": 4, 00:11:16.311 "num_base_bdevs_discovered": 3, 00:11:16.311 "num_base_bdevs_operational": 4, 00:11:16.311 "base_bdevs_list": [ 00:11:16.311 { 00:11:16.311 "name": "BaseBdev1", 00:11:16.311 "uuid": "aeda5b83-806c-45b4-a61c-be42c1ad6738", 00:11:16.311 "is_configured": true, 00:11:16.311 "data_offset": 0, 00:11:16.311 "data_size": 65536 00:11:16.311 }, 00:11:16.311 { 00:11:16.311 "name": null, 00:11:16.311 "uuid": "1ae95596-722a-4e1a-980c-dad04f4e5154", 00:11:16.311 "is_configured": false, 00:11:16.311 "data_offset": 0, 00:11:16.311 "data_size": 65536 00:11:16.311 }, 00:11:16.311 { 00:11:16.311 "name": "BaseBdev3", 00:11:16.311 "uuid": "c0f66f0d-99c0-4701-b8bc-94b492354d33", 00:11:16.311 "is_configured": true, 00:11:16.311 "data_offset": 0, 00:11:16.311 "data_size": 65536 00:11:16.311 }, 00:11:16.311 { 00:11:16.311 "name": "BaseBdev4", 00:11:16.311 "uuid": "475368b0-0567-4b3e-8e29-a64857680e02", 00:11:16.311 "is_configured": true, 00:11:16.311 "data_offset": 0, 00:11:16.311 "data_size": 65536 00:11:16.311 } 00:11:16.311 ] 00:11:16.311 }' 00:11:16.311 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.311 05:49:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.571 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:16.571 05:49:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:16.571 05:49:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.571 [2024-12-12 05:49:24.032847] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.571 "name": "Existed_Raid", 00:11:16.571 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.571 "strip_size_kb": 64, 00:11:16.571 "state": "configuring", 00:11:16.571 "raid_level": "concat", 00:11:16.571 "superblock": false, 00:11:16.571 "num_base_bdevs": 4, 00:11:16.571 "num_base_bdevs_discovered": 2, 00:11:16.571 "num_base_bdevs_operational": 4, 00:11:16.571 "base_bdevs_list": [ 00:11:16.571 { 00:11:16.571 "name": "BaseBdev1", 00:11:16.571 "uuid": "aeda5b83-806c-45b4-a61c-be42c1ad6738", 00:11:16.571 "is_configured": true, 00:11:16.571 "data_offset": 0, 00:11:16.571 "data_size": 65536 00:11:16.571 }, 00:11:16.571 { 00:11:16.571 "name": null, 00:11:16.571 "uuid": "1ae95596-722a-4e1a-980c-dad04f4e5154", 00:11:16.571 "is_configured": false, 00:11:16.571 "data_offset": 0, 00:11:16.571 "data_size": 65536 00:11:16.571 }, 00:11:16.571 { 00:11:16.571 "name": null, 00:11:16.571 "uuid": "c0f66f0d-99c0-4701-b8bc-94b492354d33", 00:11:16.571 "is_configured": false, 00:11:16.571 "data_offset": 0, 00:11:16.571 "data_size": 65536 00:11:16.571 }, 00:11:16.571 { 00:11:16.571 "name": "BaseBdev4", 00:11:16.571 "uuid": "475368b0-0567-4b3e-8e29-a64857680e02", 00:11:16.571 "is_configured": true, 00:11:16.571 "data_offset": 0, 00:11:16.571 "data_size": 65536 00:11:16.571 } 00:11:16.571 ] 00:11:16.571 }' 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.571 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq 
'.[0].base_bdevs_list[2].is_configured' 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.141 [2024-12-12 05:49:24.496012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.141 "name": "Existed_Raid", 00:11:17.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.141 "strip_size_kb": 64, 00:11:17.141 "state": "configuring", 00:11:17.141 "raid_level": "concat", 00:11:17.141 "superblock": false, 00:11:17.141 "num_base_bdevs": 4, 00:11:17.141 "num_base_bdevs_discovered": 3, 00:11:17.141 "num_base_bdevs_operational": 4, 00:11:17.141 "base_bdevs_list": [ 00:11:17.141 { 00:11:17.141 "name": "BaseBdev1", 00:11:17.141 "uuid": "aeda5b83-806c-45b4-a61c-be42c1ad6738", 00:11:17.141 "is_configured": true, 00:11:17.141 "data_offset": 0, 00:11:17.141 "data_size": 65536 00:11:17.141 }, 00:11:17.141 { 00:11:17.141 "name": null, 00:11:17.141 "uuid": "1ae95596-722a-4e1a-980c-dad04f4e5154", 00:11:17.141 "is_configured": false, 00:11:17.141 "data_offset": 0, 00:11:17.141 "data_size": 65536 00:11:17.141 }, 00:11:17.141 { 00:11:17.141 "name": "BaseBdev3", 00:11:17.141 "uuid": "c0f66f0d-99c0-4701-b8bc-94b492354d33", 00:11:17.141 "is_configured": 
true, 00:11:17.141 "data_offset": 0, 00:11:17.141 "data_size": 65536 00:11:17.141 }, 00:11:17.141 { 00:11:17.141 "name": "BaseBdev4", 00:11:17.141 "uuid": "475368b0-0567-4b3e-8e29-a64857680e02", 00:11:17.141 "is_configured": true, 00:11:17.141 "data_offset": 0, 00:11:17.141 "data_size": 65536 00:11:17.141 } 00:11:17.141 ] 00:11:17.141 }' 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.141 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.407 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.407 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.407 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.407 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:17.689 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.689 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:17.689 05:49:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:17.689 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.689 05:49:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.689 [2024-12-12 05:49:24.947267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:17.689 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.689 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:17.689 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:17.689 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.689 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:17.689 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:17.689 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:17.689 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.689 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.689 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.689 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.689 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.689 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.689 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.689 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.689 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.689 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:17.689 "name": "Existed_Raid", 00:11:17.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.689 "strip_size_kb": 64, 00:11:17.689 "state": "configuring", 00:11:17.689 "raid_level": "concat", 00:11:17.689 "superblock": false, 00:11:17.689 "num_base_bdevs": 4, 00:11:17.689 "num_base_bdevs_discovered": 2, 00:11:17.689 "num_base_bdevs_operational": 4, 00:11:17.689 
"base_bdevs_list": [ 00:11:17.689 { 00:11:17.689 "name": null, 00:11:17.689 "uuid": "aeda5b83-806c-45b4-a61c-be42c1ad6738", 00:11:17.689 "is_configured": false, 00:11:17.689 "data_offset": 0, 00:11:17.689 "data_size": 65536 00:11:17.689 }, 00:11:17.689 { 00:11:17.689 "name": null, 00:11:17.689 "uuid": "1ae95596-722a-4e1a-980c-dad04f4e5154", 00:11:17.689 "is_configured": false, 00:11:17.689 "data_offset": 0, 00:11:17.689 "data_size": 65536 00:11:17.689 }, 00:11:17.689 { 00:11:17.689 "name": "BaseBdev3", 00:11:17.689 "uuid": "c0f66f0d-99c0-4701-b8bc-94b492354d33", 00:11:17.689 "is_configured": true, 00:11:17.689 "data_offset": 0, 00:11:17.689 "data_size": 65536 00:11:17.689 }, 00:11:17.689 { 00:11:17.689 "name": "BaseBdev4", 00:11:17.689 "uuid": "475368b0-0567-4b3e-8e29-a64857680e02", 00:11:17.689 "is_configured": true, 00:11:17.689 "data_offset": 0, 00:11:17.689 "data_size": 65536 00:11:17.689 } 00:11:17.689 ] 00:11:17.689 }' 00:11:17.689 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.689 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:18.258 05:49:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.258 [2024-12-12 05:49:25.523788] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.258 "name": "Existed_Raid", 00:11:18.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.258 "strip_size_kb": 64, 00:11:18.258 "state": "configuring", 00:11:18.258 "raid_level": "concat", 00:11:18.258 "superblock": false, 00:11:18.258 "num_base_bdevs": 4, 00:11:18.258 "num_base_bdevs_discovered": 3, 00:11:18.258 "num_base_bdevs_operational": 4, 00:11:18.258 "base_bdevs_list": [ 00:11:18.258 { 00:11:18.258 "name": null, 00:11:18.258 "uuid": "aeda5b83-806c-45b4-a61c-be42c1ad6738", 00:11:18.258 "is_configured": false, 00:11:18.258 "data_offset": 0, 00:11:18.258 "data_size": 65536 00:11:18.258 }, 00:11:18.258 { 00:11:18.258 "name": "BaseBdev2", 00:11:18.258 "uuid": "1ae95596-722a-4e1a-980c-dad04f4e5154", 00:11:18.258 "is_configured": true, 00:11:18.258 "data_offset": 0, 00:11:18.258 "data_size": 65536 00:11:18.258 }, 00:11:18.258 { 00:11:18.258 "name": "BaseBdev3", 00:11:18.258 "uuid": "c0f66f0d-99c0-4701-b8bc-94b492354d33", 00:11:18.258 "is_configured": true, 00:11:18.258 "data_offset": 0, 00:11:18.258 "data_size": 65536 00:11:18.258 }, 00:11:18.258 { 00:11:18.258 "name": "BaseBdev4", 00:11:18.258 "uuid": "475368b0-0567-4b3e-8e29-a64857680e02", 00:11:18.258 "is_configured": true, 00:11:18.258 "data_offset": 0, 00:11:18.258 "data_size": 65536 00:11:18.258 } 00:11:18.258 ] 00:11:18.258 }' 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.258 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.518 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:18.518 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- 
# rpc_cmd bdev_raid_get_bdevs all 00:11:18.518 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.518 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.518 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.518 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:18.518 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:18.518 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.518 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.518 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.518 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.518 05:49:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u aeda5b83-806c-45b4-a61c-be42c1ad6738 00:11:18.518 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.518 05:49:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.518 [2024-12-12 05:49:26.023053] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:18.518 [2024-12-12 05:49:26.023184] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:18.518 [2024-12-12 05:49:26.023208] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:18.518 [2024-12-12 05:49:26.023538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:18.518 [2024-12-12 05:49:26.023741] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:18.518 [2024-12-12 05:49:26.023785] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:18.518 NewBaseBdev 00:11:18.518 [2024-12-12 05:49:26.024106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:18.518 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.518 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:18.518 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:18.518 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.518 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:18.518 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.518 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.518 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:18.518 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.518 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.518 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.518 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:18.519 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.519 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.779 [ 00:11:18.779 { 
00:11:18.779 "name": "NewBaseBdev", 00:11:18.779 "aliases": [ 00:11:18.779 "aeda5b83-806c-45b4-a61c-be42c1ad6738" 00:11:18.779 ], 00:11:18.779 "product_name": "Malloc disk", 00:11:18.779 "block_size": 512, 00:11:18.779 "num_blocks": 65536, 00:11:18.779 "uuid": "aeda5b83-806c-45b4-a61c-be42c1ad6738", 00:11:18.779 "assigned_rate_limits": { 00:11:18.779 "rw_ios_per_sec": 0, 00:11:18.779 "rw_mbytes_per_sec": 0, 00:11:18.779 "r_mbytes_per_sec": 0, 00:11:18.779 "w_mbytes_per_sec": 0 00:11:18.779 }, 00:11:18.779 "claimed": true, 00:11:18.779 "claim_type": "exclusive_write", 00:11:18.779 "zoned": false, 00:11:18.779 "supported_io_types": { 00:11:18.779 "read": true, 00:11:18.779 "write": true, 00:11:18.779 "unmap": true, 00:11:18.779 "flush": true, 00:11:18.779 "reset": true, 00:11:18.779 "nvme_admin": false, 00:11:18.779 "nvme_io": false, 00:11:18.779 "nvme_io_md": false, 00:11:18.779 "write_zeroes": true, 00:11:18.779 "zcopy": true, 00:11:18.779 "get_zone_info": false, 00:11:18.779 "zone_management": false, 00:11:18.779 "zone_append": false, 00:11:18.779 "compare": false, 00:11:18.779 "compare_and_write": false, 00:11:18.779 "abort": true, 00:11:18.779 "seek_hole": false, 00:11:18.779 "seek_data": false, 00:11:18.779 "copy": true, 00:11:18.779 "nvme_iov_md": false 00:11:18.779 }, 00:11:18.779 "memory_domains": [ 00:11:18.779 { 00:11:18.779 "dma_device_id": "system", 00:11:18.779 "dma_device_type": 1 00:11:18.779 }, 00:11:18.779 { 00:11:18.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.779 "dma_device_type": 2 00:11:18.779 } 00:11:18.779 ], 00:11:18.779 "driver_specific": {} 00:11:18.779 } 00:11:18.779 ] 00:11:18.779 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.779 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:18.779 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:18.779 
05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.779 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:18.779 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:18.779 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.779 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:18.779 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.779 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.779 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.779 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.779 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.779 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.779 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.779 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.779 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.779 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.779 "name": "Existed_Raid", 00:11:18.779 "uuid": "55bdd313-9b2b-4a7b-a912-7a84cbacfab0", 00:11:18.779 "strip_size_kb": 64, 00:11:18.779 "state": "online", 00:11:18.779 "raid_level": "concat", 00:11:18.779 "superblock": false, 00:11:18.779 "num_base_bdevs": 4, 00:11:18.779 "num_base_bdevs_discovered": 4, 00:11:18.779 
"num_base_bdevs_operational": 4, 00:11:18.779 "base_bdevs_list": [ 00:11:18.779 { 00:11:18.779 "name": "NewBaseBdev", 00:11:18.779 "uuid": "aeda5b83-806c-45b4-a61c-be42c1ad6738", 00:11:18.779 "is_configured": true, 00:11:18.779 "data_offset": 0, 00:11:18.779 "data_size": 65536 00:11:18.779 }, 00:11:18.779 { 00:11:18.779 "name": "BaseBdev2", 00:11:18.779 "uuid": "1ae95596-722a-4e1a-980c-dad04f4e5154", 00:11:18.779 "is_configured": true, 00:11:18.779 "data_offset": 0, 00:11:18.779 "data_size": 65536 00:11:18.779 }, 00:11:18.779 { 00:11:18.779 "name": "BaseBdev3", 00:11:18.779 "uuid": "c0f66f0d-99c0-4701-b8bc-94b492354d33", 00:11:18.779 "is_configured": true, 00:11:18.779 "data_offset": 0, 00:11:18.779 "data_size": 65536 00:11:18.779 }, 00:11:18.779 { 00:11:18.779 "name": "BaseBdev4", 00:11:18.779 "uuid": "475368b0-0567-4b3e-8e29-a64857680e02", 00:11:18.779 "is_configured": true, 00:11:18.779 "data_offset": 0, 00:11:18.779 "data_size": 65536 00:11:18.779 } 00:11:18.779 ] 00:11:18.779 }' 00:11:18.779 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.779 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.040 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:19.040 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:19.040 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:19.040 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:19.040 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:19.040 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:19.040 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:11:19.040 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:19.040 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.040 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.040 [2024-12-12 05:49:26.462785] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.040 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.040 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:19.040 "name": "Existed_Raid", 00:11:19.040 "aliases": [ 00:11:19.040 "55bdd313-9b2b-4a7b-a912-7a84cbacfab0" 00:11:19.040 ], 00:11:19.040 "product_name": "Raid Volume", 00:11:19.040 "block_size": 512, 00:11:19.040 "num_blocks": 262144, 00:11:19.040 "uuid": "55bdd313-9b2b-4a7b-a912-7a84cbacfab0", 00:11:19.040 "assigned_rate_limits": { 00:11:19.040 "rw_ios_per_sec": 0, 00:11:19.040 "rw_mbytes_per_sec": 0, 00:11:19.040 "r_mbytes_per_sec": 0, 00:11:19.040 "w_mbytes_per_sec": 0 00:11:19.040 }, 00:11:19.040 "claimed": false, 00:11:19.040 "zoned": false, 00:11:19.040 "supported_io_types": { 00:11:19.040 "read": true, 00:11:19.040 "write": true, 00:11:19.040 "unmap": true, 00:11:19.040 "flush": true, 00:11:19.040 "reset": true, 00:11:19.040 "nvme_admin": false, 00:11:19.040 "nvme_io": false, 00:11:19.040 "nvme_io_md": false, 00:11:19.040 "write_zeroes": true, 00:11:19.040 "zcopy": false, 00:11:19.040 "get_zone_info": false, 00:11:19.040 "zone_management": false, 00:11:19.040 "zone_append": false, 00:11:19.040 "compare": false, 00:11:19.040 "compare_and_write": false, 00:11:19.040 "abort": false, 00:11:19.040 "seek_hole": false, 00:11:19.040 "seek_data": false, 00:11:19.040 "copy": false, 00:11:19.040 "nvme_iov_md": false 00:11:19.040 }, 00:11:19.040 "memory_domains": [ 00:11:19.040 { 00:11:19.040 "dma_device_id": "system", 
00:11:19.040 "dma_device_type": 1 00:11:19.040 }, 00:11:19.040 { 00:11:19.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.040 "dma_device_type": 2 00:11:19.040 }, 00:11:19.040 { 00:11:19.040 "dma_device_id": "system", 00:11:19.040 "dma_device_type": 1 00:11:19.040 }, 00:11:19.040 { 00:11:19.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.040 "dma_device_type": 2 00:11:19.040 }, 00:11:19.040 { 00:11:19.040 "dma_device_id": "system", 00:11:19.040 "dma_device_type": 1 00:11:19.040 }, 00:11:19.040 { 00:11:19.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.040 "dma_device_type": 2 00:11:19.040 }, 00:11:19.040 { 00:11:19.040 "dma_device_id": "system", 00:11:19.040 "dma_device_type": 1 00:11:19.040 }, 00:11:19.040 { 00:11:19.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.040 "dma_device_type": 2 00:11:19.040 } 00:11:19.040 ], 00:11:19.040 "driver_specific": { 00:11:19.040 "raid": { 00:11:19.040 "uuid": "55bdd313-9b2b-4a7b-a912-7a84cbacfab0", 00:11:19.040 "strip_size_kb": 64, 00:11:19.040 "state": "online", 00:11:19.040 "raid_level": "concat", 00:11:19.040 "superblock": false, 00:11:19.040 "num_base_bdevs": 4, 00:11:19.040 "num_base_bdevs_discovered": 4, 00:11:19.040 "num_base_bdevs_operational": 4, 00:11:19.040 "base_bdevs_list": [ 00:11:19.040 { 00:11:19.040 "name": "NewBaseBdev", 00:11:19.040 "uuid": "aeda5b83-806c-45b4-a61c-be42c1ad6738", 00:11:19.040 "is_configured": true, 00:11:19.040 "data_offset": 0, 00:11:19.040 "data_size": 65536 00:11:19.040 }, 00:11:19.040 { 00:11:19.040 "name": "BaseBdev2", 00:11:19.040 "uuid": "1ae95596-722a-4e1a-980c-dad04f4e5154", 00:11:19.040 "is_configured": true, 00:11:19.040 "data_offset": 0, 00:11:19.040 "data_size": 65536 00:11:19.040 }, 00:11:19.040 { 00:11:19.040 "name": "BaseBdev3", 00:11:19.040 "uuid": "c0f66f0d-99c0-4701-b8bc-94b492354d33", 00:11:19.040 "is_configured": true, 00:11:19.040 "data_offset": 0, 00:11:19.040 "data_size": 65536 00:11:19.040 }, 00:11:19.040 { 00:11:19.040 "name": "BaseBdev4", 
00:11:19.040 "uuid": "475368b0-0567-4b3e-8e29-a64857680e02", 00:11:19.040 "is_configured": true, 00:11:19.040 "data_offset": 0, 00:11:19.040 "data_size": 65536 00:11:19.040 } 00:11:19.040 ] 00:11:19.040 } 00:11:19.040 } 00:11:19.040 }' 00:11:19.040 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:19.040 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:19.040 BaseBdev2 00:11:19.040 BaseBdev3 00:11:19.040 BaseBdev4' 00:11:19.040 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:19.300 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:19.300 05:49:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:19.301 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.301 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.301 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.301 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:19.301 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:19.301 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:19.301 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.301 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.301 [2024-12-12 05:49:26.761895] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:19.301 [2024-12-12 05:49:26.761930] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:19.301 [2024-12-12 05:49:26.762011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.301 [2024-12-12 05:49:26.762079] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.301 [2024-12-12 05:49:26.762089] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:19.301 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.301 05:49:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 72180 00:11:19.301 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 
-- # '[' -z 72180 ']' 00:11:19.301 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 72180 00:11:19.301 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:19.301 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.301 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72180 00:11:19.301 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:19.301 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:19.301 killing process with pid 72180 00:11:19.301 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72180' 00:11:19.301 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 72180 00:11:19.301 [2024-12-12 05:49:26.810702] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.301 05:49:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 72180 00:11:19.870 [2024-12-12 05:49:27.192252] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:20.809 05:49:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:20.809 00:11:20.809 real 0m11.021s 00:11:20.809 user 0m17.465s 00:11:20.809 sys 0m1.980s 00:11:20.809 05:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.809 05:49:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.809 ************************************ 00:11:20.809 END TEST raid_state_function_test 00:11:20.809 ************************************ 00:11:21.070 05:49:28 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:11:21.070 05:49:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:21.070 05:49:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.070 05:49:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:21.070 ************************************ 00:11:21.070 START TEST raid_state_function_test_sb 00:11:21.070 ************************************ 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:21.070 05:49:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72854 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:21.070 Process raid pid: 72854 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72854' 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72854 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 72854 ']' 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.070 05:49:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.070 [2024-12-12 05:49:28.459385] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:11:21.070 [2024-12-12 05:49:28.459508] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.330 [2024-12-12 05:49:28.631567] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:21.330 [2024-12-12 05:49:28.743914] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.589 [2024-12-12 05:49:28.947357] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.590 [2024-12-12 05:49:28.947407] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.900 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.900 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:21.900 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:21.900 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.900 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.900 [2024-12-12 05:49:29.284534] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:21.900 [2024-12-12 05:49:29.284581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:21.900 [2024-12-12 05:49:29.284591] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:21.901 [2024-12-12 05:49:29.284600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:21.901 [2024-12-12 05:49:29.284606] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:21.901 [2024-12-12 05:49:29.284615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:21.901 [2024-12-12 05:49:29.284625] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:21.901 [2024-12-12 05:49:29.284634] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:21.901 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.901 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:21.901 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.901 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.901 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:21.901 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.901 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.901 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.901 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.901 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.901 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.901 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.901 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.901 
05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.901 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.901 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.901 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.901 "name": "Existed_Raid", 00:11:21.901 "uuid": "10ed43bf-651c-49f9-9bf1-42803f7e376f", 00:11:21.901 "strip_size_kb": 64, 00:11:21.901 "state": "configuring", 00:11:21.901 "raid_level": "concat", 00:11:21.901 "superblock": true, 00:11:21.901 "num_base_bdevs": 4, 00:11:21.901 "num_base_bdevs_discovered": 0, 00:11:21.901 "num_base_bdevs_operational": 4, 00:11:21.901 "base_bdevs_list": [ 00:11:21.901 { 00:11:21.901 "name": "BaseBdev1", 00:11:21.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.901 "is_configured": false, 00:11:21.901 "data_offset": 0, 00:11:21.901 "data_size": 0 00:11:21.901 }, 00:11:21.901 { 00:11:21.901 "name": "BaseBdev2", 00:11:21.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.901 "is_configured": false, 00:11:21.901 "data_offset": 0, 00:11:21.901 "data_size": 0 00:11:21.901 }, 00:11:21.901 { 00:11:21.901 "name": "BaseBdev3", 00:11:21.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.901 "is_configured": false, 00:11:21.901 "data_offset": 0, 00:11:21.901 "data_size": 0 00:11:21.901 }, 00:11:21.901 { 00:11:21.901 "name": "BaseBdev4", 00:11:21.901 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.901 "is_configured": false, 00:11:21.901 "data_offset": 0, 00:11:21.901 "data_size": 0 00:11:21.901 } 00:11:21.901 ] 00:11:21.901 }' 00:11:21.901 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.901 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.485 05:49:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.485 [2024-12-12 05:49:29.731676] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:22.485 [2024-12-12 05:49:29.731719] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.485 [2024-12-12 05:49:29.743664] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:22.485 [2024-12-12 05:49:29.743707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:22.485 [2024-12-12 05:49:29.743715] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.485 [2024-12-12 05:49:29.743724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.485 [2024-12-12 05:49:29.743730] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:22.485 [2024-12-12 05:49:29.743738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.485 [2024-12-12 05:49:29.743744] bdev.c:8697:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:22.485 [2024-12-12 05:49:29.743752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.485 [2024-12-12 05:49:29.791318] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.485 BaseBdev1 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.485 [ 00:11:22.485 { 00:11:22.485 "name": "BaseBdev1", 00:11:22.485 "aliases": [ 00:11:22.485 "5bf3155c-41f1-40b5-a84d-1b88e42ff2a9" 00:11:22.485 ], 00:11:22.485 "product_name": "Malloc disk", 00:11:22.485 "block_size": 512, 00:11:22.485 "num_blocks": 65536, 00:11:22.485 "uuid": "5bf3155c-41f1-40b5-a84d-1b88e42ff2a9", 00:11:22.485 "assigned_rate_limits": { 00:11:22.485 "rw_ios_per_sec": 0, 00:11:22.485 "rw_mbytes_per_sec": 0, 00:11:22.485 "r_mbytes_per_sec": 0, 00:11:22.485 "w_mbytes_per_sec": 0 00:11:22.485 }, 00:11:22.485 "claimed": true, 00:11:22.485 "claim_type": "exclusive_write", 00:11:22.485 "zoned": false, 00:11:22.485 "supported_io_types": { 00:11:22.485 "read": true, 00:11:22.485 "write": true, 00:11:22.485 "unmap": true, 00:11:22.485 "flush": true, 00:11:22.485 "reset": true, 00:11:22.485 "nvme_admin": false, 00:11:22.485 "nvme_io": false, 00:11:22.485 "nvme_io_md": false, 00:11:22.485 "write_zeroes": true, 00:11:22.485 "zcopy": true, 00:11:22.485 "get_zone_info": false, 00:11:22.485 "zone_management": false, 00:11:22.485 "zone_append": false, 00:11:22.485 "compare": false, 00:11:22.485 "compare_and_write": false, 00:11:22.485 "abort": true, 00:11:22.485 "seek_hole": false, 00:11:22.485 "seek_data": false, 00:11:22.485 "copy": true, 00:11:22.485 "nvme_iov_md": false 00:11:22.485 }, 00:11:22.485 "memory_domains": [ 00:11:22.485 { 00:11:22.485 "dma_device_id": "system", 00:11:22.485 "dma_device_type": 1 00:11:22.485 }, 00:11:22.485 { 00:11:22.485 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.485 "dma_device_type": 2 00:11:22.485 } 
00:11:22.485 ], 00:11:22.485 "driver_specific": {} 00:11:22.485 } 00:11:22.485 ] 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.485 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.486 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.486 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.486 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.486 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.486 05:49:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.486 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.486 "name": "Existed_Raid", 00:11:22.486 "uuid": "4c3ab9c9-5d1b-41c3-815f-0ceada7280e5", 00:11:22.486 "strip_size_kb": 64, 00:11:22.486 "state": "configuring", 00:11:22.486 "raid_level": "concat", 00:11:22.486 "superblock": true, 00:11:22.486 "num_base_bdevs": 4, 00:11:22.486 "num_base_bdevs_discovered": 1, 00:11:22.486 "num_base_bdevs_operational": 4, 00:11:22.486 "base_bdevs_list": [ 00:11:22.486 { 00:11:22.486 "name": "BaseBdev1", 00:11:22.486 "uuid": "5bf3155c-41f1-40b5-a84d-1b88e42ff2a9", 00:11:22.486 "is_configured": true, 00:11:22.486 "data_offset": 2048, 00:11:22.486 "data_size": 63488 00:11:22.486 }, 00:11:22.486 { 00:11:22.486 "name": "BaseBdev2", 00:11:22.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.486 "is_configured": false, 00:11:22.486 "data_offset": 0, 00:11:22.486 "data_size": 0 00:11:22.486 }, 00:11:22.486 { 00:11:22.486 "name": "BaseBdev3", 00:11:22.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.486 "is_configured": false, 00:11:22.486 "data_offset": 0, 00:11:22.486 "data_size": 0 00:11:22.486 }, 00:11:22.486 { 00:11:22.486 "name": "BaseBdev4", 00:11:22.486 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:22.486 "is_configured": false, 00:11:22.486 "data_offset": 0, 00:11:22.486 "data_size": 0 00:11:22.486 } 00:11:22.486 ] 00:11:22.486 }' 00:11:22.486 05:49:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.486 05:49:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.746 05:49:30 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.746 [2024-12-12 05:49:30.234600] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:22.746 [2024-12-12 05:49:30.234659] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.746 [2024-12-12 05:49:30.246641] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:22.746 [2024-12-12 05:49:30.248403] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:22.746 [2024-12-12 05:49:30.248446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:22.746 [2024-12-12 05:49:30.248456] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:22.746 [2024-12-12 05:49:30.248465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:22.746 [2024-12-12 05:49:30.248488] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:22.746 [2024-12-12 05:49:30.248497] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.746 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.005 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.005 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:23.005 "name": "Existed_Raid", 00:11:23.005 "uuid": "6303af2c-fe60-4a9f-9600-7bf6f5359e75", 00:11:23.005 "strip_size_kb": 64, 00:11:23.005 "state": "configuring", 00:11:23.005 "raid_level": "concat", 00:11:23.005 "superblock": true, 00:11:23.005 "num_base_bdevs": 4, 00:11:23.005 "num_base_bdevs_discovered": 1, 00:11:23.005 "num_base_bdevs_operational": 4, 00:11:23.005 "base_bdevs_list": [ 00:11:23.005 { 00:11:23.005 "name": "BaseBdev1", 00:11:23.005 "uuid": "5bf3155c-41f1-40b5-a84d-1b88e42ff2a9", 00:11:23.005 "is_configured": true, 00:11:23.005 "data_offset": 2048, 00:11:23.005 "data_size": 63488 00:11:23.005 }, 00:11:23.005 { 00:11:23.005 "name": "BaseBdev2", 00:11:23.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.005 "is_configured": false, 00:11:23.005 "data_offset": 0, 00:11:23.005 "data_size": 0 00:11:23.005 }, 00:11:23.005 { 00:11:23.005 "name": "BaseBdev3", 00:11:23.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.005 "is_configured": false, 00:11:23.005 "data_offset": 0, 00:11:23.005 "data_size": 0 00:11:23.005 }, 00:11:23.005 { 00:11:23.005 "name": "BaseBdev4", 00:11:23.005 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.005 "is_configured": false, 00:11:23.005 "data_offset": 0, 00:11:23.005 "data_size": 0 00:11:23.005 } 00:11:23.005 ] 00:11:23.005 }' 00:11:23.005 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.005 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.265 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:23.265 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.265 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.265 [2024-12-12 05:49:30.711577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:23.265 BaseBdev2 00:11:23.265 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.265 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:23.265 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:23.265 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.265 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:23.265 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.265 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.265 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.265 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.265 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.265 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.265 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:23.265 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.265 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.265 [ 00:11:23.265 { 00:11:23.265 "name": "BaseBdev2", 00:11:23.265 "aliases": [ 00:11:23.265 "d2216823-44ee-4eb3-bfee-f7e5533d8ff4" 00:11:23.265 ], 00:11:23.265 "product_name": "Malloc disk", 00:11:23.265 "block_size": 512, 00:11:23.265 "num_blocks": 65536, 00:11:23.265 "uuid": "d2216823-44ee-4eb3-bfee-f7e5533d8ff4", 
00:11:23.265 "assigned_rate_limits": { 00:11:23.265 "rw_ios_per_sec": 0, 00:11:23.265 "rw_mbytes_per_sec": 0, 00:11:23.265 "r_mbytes_per_sec": 0, 00:11:23.265 "w_mbytes_per_sec": 0 00:11:23.265 }, 00:11:23.265 "claimed": true, 00:11:23.265 "claim_type": "exclusive_write", 00:11:23.265 "zoned": false, 00:11:23.265 "supported_io_types": { 00:11:23.265 "read": true, 00:11:23.265 "write": true, 00:11:23.265 "unmap": true, 00:11:23.265 "flush": true, 00:11:23.265 "reset": true, 00:11:23.265 "nvme_admin": false, 00:11:23.265 "nvme_io": false, 00:11:23.265 "nvme_io_md": false, 00:11:23.265 "write_zeroes": true, 00:11:23.265 "zcopy": true, 00:11:23.265 "get_zone_info": false, 00:11:23.265 "zone_management": false, 00:11:23.265 "zone_append": false, 00:11:23.265 "compare": false, 00:11:23.265 "compare_and_write": false, 00:11:23.265 "abort": true, 00:11:23.265 "seek_hole": false, 00:11:23.265 "seek_data": false, 00:11:23.265 "copy": true, 00:11:23.265 "nvme_iov_md": false 00:11:23.265 }, 00:11:23.265 "memory_domains": [ 00:11:23.265 { 00:11:23.265 "dma_device_id": "system", 00:11:23.265 "dma_device_type": 1 00:11:23.265 }, 00:11:23.265 { 00:11:23.265 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.265 "dma_device_type": 2 00:11:23.265 } 00:11:23.265 ], 00:11:23.265 "driver_specific": {} 00:11:23.265 } 00:11:23.265 ] 00:11:23.265 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.265 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:23.265 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:23.266 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.266 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:23.266 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:23.266 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.266 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.266 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.266 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.266 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.266 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.266 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.266 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.266 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.266 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.266 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.266 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.266 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.526 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.526 "name": "Existed_Raid", 00:11:23.526 "uuid": "6303af2c-fe60-4a9f-9600-7bf6f5359e75", 00:11:23.526 "strip_size_kb": 64, 00:11:23.526 "state": "configuring", 00:11:23.526 "raid_level": "concat", 00:11:23.526 "superblock": true, 00:11:23.526 "num_base_bdevs": 4, 00:11:23.526 "num_base_bdevs_discovered": 2, 00:11:23.526 
"num_base_bdevs_operational": 4, 00:11:23.526 "base_bdevs_list": [ 00:11:23.526 { 00:11:23.526 "name": "BaseBdev1", 00:11:23.526 "uuid": "5bf3155c-41f1-40b5-a84d-1b88e42ff2a9", 00:11:23.526 "is_configured": true, 00:11:23.526 "data_offset": 2048, 00:11:23.526 "data_size": 63488 00:11:23.526 }, 00:11:23.526 { 00:11:23.526 "name": "BaseBdev2", 00:11:23.526 "uuid": "d2216823-44ee-4eb3-bfee-f7e5533d8ff4", 00:11:23.526 "is_configured": true, 00:11:23.526 "data_offset": 2048, 00:11:23.526 "data_size": 63488 00:11:23.526 }, 00:11:23.526 { 00:11:23.526 "name": "BaseBdev3", 00:11:23.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.526 "is_configured": false, 00:11:23.526 "data_offset": 0, 00:11:23.526 "data_size": 0 00:11:23.526 }, 00:11:23.526 { 00:11:23.526 "name": "BaseBdev4", 00:11:23.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.526 "is_configured": false, 00:11:23.526 "data_offset": 0, 00:11:23.526 "data_size": 0 00:11:23.526 } 00:11:23.526 ] 00:11:23.526 }' 00:11:23.526 05:49:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.526 05:49:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.786 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:23.786 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.786 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.786 [2024-12-12 05:49:31.209403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.786 BaseBdev3 00:11:23.786 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.786 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:23.786 05:49:31 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:23.786 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:23.786 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.787 [ 00:11:23.787 { 00:11:23.787 "name": "BaseBdev3", 00:11:23.787 "aliases": [ 00:11:23.787 "53b40d69-254b-465e-95a1-371a64f5d0ee" 00:11:23.787 ], 00:11:23.787 "product_name": "Malloc disk", 00:11:23.787 "block_size": 512, 00:11:23.787 "num_blocks": 65536, 00:11:23.787 "uuid": "53b40d69-254b-465e-95a1-371a64f5d0ee", 00:11:23.787 "assigned_rate_limits": { 00:11:23.787 "rw_ios_per_sec": 0, 00:11:23.787 "rw_mbytes_per_sec": 0, 00:11:23.787 "r_mbytes_per_sec": 0, 00:11:23.787 "w_mbytes_per_sec": 0 00:11:23.787 }, 00:11:23.787 "claimed": true, 00:11:23.787 "claim_type": "exclusive_write", 00:11:23.787 "zoned": false, 00:11:23.787 "supported_io_types": { 
00:11:23.787 "read": true, 00:11:23.787 "write": true, 00:11:23.787 "unmap": true, 00:11:23.787 "flush": true, 00:11:23.787 "reset": true, 00:11:23.787 "nvme_admin": false, 00:11:23.787 "nvme_io": false, 00:11:23.787 "nvme_io_md": false, 00:11:23.787 "write_zeroes": true, 00:11:23.787 "zcopy": true, 00:11:23.787 "get_zone_info": false, 00:11:23.787 "zone_management": false, 00:11:23.787 "zone_append": false, 00:11:23.787 "compare": false, 00:11:23.787 "compare_and_write": false, 00:11:23.787 "abort": true, 00:11:23.787 "seek_hole": false, 00:11:23.787 "seek_data": false, 00:11:23.787 "copy": true, 00:11:23.787 "nvme_iov_md": false 00:11:23.787 }, 00:11:23.787 "memory_domains": [ 00:11:23.787 { 00:11:23.787 "dma_device_id": "system", 00:11:23.787 "dma_device_type": 1 00:11:23.787 }, 00:11:23.787 { 00:11:23.787 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:23.787 "dma_device_type": 2 00:11:23.787 } 00:11:23.787 ], 00:11:23.787 "driver_specific": {} 00:11:23.787 } 00:11:23.787 ] 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:23.787 "name": "Existed_Raid", 00:11:23.787 "uuid": "6303af2c-fe60-4a9f-9600-7bf6f5359e75", 00:11:23.787 "strip_size_kb": 64, 00:11:23.787 "state": "configuring", 00:11:23.787 "raid_level": "concat", 00:11:23.787 "superblock": true, 00:11:23.787 "num_base_bdevs": 4, 00:11:23.787 "num_base_bdevs_discovered": 3, 00:11:23.787 "num_base_bdevs_operational": 4, 00:11:23.787 "base_bdevs_list": [ 00:11:23.787 { 00:11:23.787 "name": "BaseBdev1", 00:11:23.787 "uuid": "5bf3155c-41f1-40b5-a84d-1b88e42ff2a9", 00:11:23.787 "is_configured": true, 00:11:23.787 "data_offset": 2048, 00:11:23.787 "data_size": 63488 00:11:23.787 }, 00:11:23.787 { 00:11:23.787 "name": "BaseBdev2", 00:11:23.787 
"uuid": "d2216823-44ee-4eb3-bfee-f7e5533d8ff4", 00:11:23.787 "is_configured": true, 00:11:23.787 "data_offset": 2048, 00:11:23.787 "data_size": 63488 00:11:23.787 }, 00:11:23.787 { 00:11:23.787 "name": "BaseBdev3", 00:11:23.787 "uuid": "53b40d69-254b-465e-95a1-371a64f5d0ee", 00:11:23.787 "is_configured": true, 00:11:23.787 "data_offset": 2048, 00:11:23.787 "data_size": 63488 00:11:23.787 }, 00:11:23.787 { 00:11:23.787 "name": "BaseBdev4", 00:11:23.787 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:23.787 "is_configured": false, 00:11:23.787 "data_offset": 0, 00:11:23.787 "data_size": 0 00:11:23.787 } 00:11:23.787 ] 00:11:23.787 }' 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:23.787 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.357 [2024-12-12 05:49:31.741902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:24.357 [2024-12-12 05:49:31.742291] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:24.357 [2024-12-12 05:49:31.742354] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:24.357 [2024-12-12 05:49:31.742693] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:24.357 BaseBdev4 00:11:24.357 [2024-12-12 05:49:31.742908] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:24.357 [2024-12-12 05:49:31.742923] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:24.357 [2024-12-12 05:49:31.743066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.357 [ 00:11:24.357 { 00:11:24.357 "name": "BaseBdev4", 00:11:24.357 "aliases": [ 00:11:24.357 "9bccd4c7-0963-4b46-9843-5c7ad60f6510" 00:11:24.357 ], 00:11:24.357 "product_name": "Malloc disk", 00:11:24.357 "block_size": 512, 00:11:24.357 
"num_blocks": 65536, 00:11:24.357 "uuid": "9bccd4c7-0963-4b46-9843-5c7ad60f6510", 00:11:24.357 "assigned_rate_limits": { 00:11:24.357 "rw_ios_per_sec": 0, 00:11:24.357 "rw_mbytes_per_sec": 0, 00:11:24.357 "r_mbytes_per_sec": 0, 00:11:24.357 "w_mbytes_per_sec": 0 00:11:24.357 }, 00:11:24.357 "claimed": true, 00:11:24.357 "claim_type": "exclusive_write", 00:11:24.357 "zoned": false, 00:11:24.357 "supported_io_types": { 00:11:24.357 "read": true, 00:11:24.357 "write": true, 00:11:24.357 "unmap": true, 00:11:24.357 "flush": true, 00:11:24.357 "reset": true, 00:11:24.357 "nvme_admin": false, 00:11:24.357 "nvme_io": false, 00:11:24.357 "nvme_io_md": false, 00:11:24.357 "write_zeroes": true, 00:11:24.357 "zcopy": true, 00:11:24.357 "get_zone_info": false, 00:11:24.357 "zone_management": false, 00:11:24.357 "zone_append": false, 00:11:24.357 "compare": false, 00:11:24.357 "compare_and_write": false, 00:11:24.357 "abort": true, 00:11:24.357 "seek_hole": false, 00:11:24.357 "seek_data": false, 00:11:24.357 "copy": true, 00:11:24.357 "nvme_iov_md": false 00:11:24.357 }, 00:11:24.357 "memory_domains": [ 00:11:24.357 { 00:11:24.357 "dma_device_id": "system", 00:11:24.357 "dma_device_type": 1 00:11:24.357 }, 00:11:24.357 { 00:11:24.357 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.357 "dma_device_type": 2 00:11:24.357 } 00:11:24.357 ], 00:11:24.357 "driver_specific": {} 00:11:24.357 } 00:11:24.357 ] 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:24.357 "name": "Existed_Raid", 00:11:24.357 "uuid": "6303af2c-fe60-4a9f-9600-7bf6f5359e75", 00:11:24.357 "strip_size_kb": 64, 00:11:24.357 "state": "online", 00:11:24.357 "raid_level": "concat", 00:11:24.357 "superblock": true, 00:11:24.357 "num_base_bdevs": 4, 
00:11:24.357 "num_base_bdevs_discovered": 4, 00:11:24.357 "num_base_bdevs_operational": 4, 00:11:24.357 "base_bdevs_list": [ 00:11:24.357 { 00:11:24.357 "name": "BaseBdev1", 00:11:24.357 "uuid": "5bf3155c-41f1-40b5-a84d-1b88e42ff2a9", 00:11:24.357 "is_configured": true, 00:11:24.357 "data_offset": 2048, 00:11:24.357 "data_size": 63488 00:11:24.357 }, 00:11:24.357 { 00:11:24.357 "name": "BaseBdev2", 00:11:24.357 "uuid": "d2216823-44ee-4eb3-bfee-f7e5533d8ff4", 00:11:24.357 "is_configured": true, 00:11:24.357 "data_offset": 2048, 00:11:24.357 "data_size": 63488 00:11:24.357 }, 00:11:24.357 { 00:11:24.357 "name": "BaseBdev3", 00:11:24.357 "uuid": "53b40d69-254b-465e-95a1-371a64f5d0ee", 00:11:24.357 "is_configured": true, 00:11:24.357 "data_offset": 2048, 00:11:24.357 "data_size": 63488 00:11:24.357 }, 00:11:24.357 { 00:11:24.357 "name": "BaseBdev4", 00:11:24.357 "uuid": "9bccd4c7-0963-4b46-9843-5c7ad60f6510", 00:11:24.357 "is_configured": true, 00:11:24.357 "data_offset": 2048, 00:11:24.357 "data_size": 63488 00:11:24.357 } 00:11:24.357 ] 00:11:24.357 }' 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:24.357 05:49:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.925 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:24.925 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:24.925 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:24.925 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:24.925 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:24.925 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:24.925 
05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:24.925 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.925 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:24.925 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.925 [2024-12-12 05:49:32.229445] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:24.925 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.925 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:24.925 "name": "Existed_Raid", 00:11:24.925 "aliases": [ 00:11:24.925 "6303af2c-fe60-4a9f-9600-7bf6f5359e75" 00:11:24.925 ], 00:11:24.925 "product_name": "Raid Volume", 00:11:24.925 "block_size": 512, 00:11:24.925 "num_blocks": 253952, 00:11:24.925 "uuid": "6303af2c-fe60-4a9f-9600-7bf6f5359e75", 00:11:24.925 "assigned_rate_limits": { 00:11:24.925 "rw_ios_per_sec": 0, 00:11:24.925 "rw_mbytes_per_sec": 0, 00:11:24.925 "r_mbytes_per_sec": 0, 00:11:24.925 "w_mbytes_per_sec": 0 00:11:24.925 }, 00:11:24.925 "claimed": false, 00:11:24.925 "zoned": false, 00:11:24.925 "supported_io_types": { 00:11:24.925 "read": true, 00:11:24.925 "write": true, 00:11:24.925 "unmap": true, 00:11:24.925 "flush": true, 00:11:24.925 "reset": true, 00:11:24.925 "nvme_admin": false, 00:11:24.925 "nvme_io": false, 00:11:24.925 "nvme_io_md": false, 00:11:24.925 "write_zeroes": true, 00:11:24.925 "zcopy": false, 00:11:24.925 "get_zone_info": false, 00:11:24.925 "zone_management": false, 00:11:24.925 "zone_append": false, 00:11:24.925 "compare": false, 00:11:24.925 "compare_and_write": false, 00:11:24.925 "abort": false, 00:11:24.925 "seek_hole": false, 00:11:24.925 "seek_data": false, 00:11:24.925 "copy": false, 00:11:24.925 
"nvme_iov_md": false 00:11:24.925 }, 00:11:24.925 "memory_domains": [ 00:11:24.925 { 00:11:24.925 "dma_device_id": "system", 00:11:24.925 "dma_device_type": 1 00:11:24.925 }, 00:11:24.925 { 00:11:24.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.925 "dma_device_type": 2 00:11:24.925 }, 00:11:24.925 { 00:11:24.925 "dma_device_id": "system", 00:11:24.925 "dma_device_type": 1 00:11:24.925 }, 00:11:24.925 { 00:11:24.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.925 "dma_device_type": 2 00:11:24.925 }, 00:11:24.925 { 00:11:24.925 "dma_device_id": "system", 00:11:24.925 "dma_device_type": 1 00:11:24.925 }, 00:11:24.925 { 00:11:24.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.925 "dma_device_type": 2 00:11:24.925 }, 00:11:24.925 { 00:11:24.925 "dma_device_id": "system", 00:11:24.925 "dma_device_type": 1 00:11:24.925 }, 00:11:24.925 { 00:11:24.925 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:24.925 "dma_device_type": 2 00:11:24.925 } 00:11:24.925 ], 00:11:24.925 "driver_specific": { 00:11:24.925 "raid": { 00:11:24.925 "uuid": "6303af2c-fe60-4a9f-9600-7bf6f5359e75", 00:11:24.925 "strip_size_kb": 64, 00:11:24.925 "state": "online", 00:11:24.925 "raid_level": "concat", 00:11:24.925 "superblock": true, 00:11:24.925 "num_base_bdevs": 4, 00:11:24.925 "num_base_bdevs_discovered": 4, 00:11:24.925 "num_base_bdevs_operational": 4, 00:11:24.925 "base_bdevs_list": [ 00:11:24.925 { 00:11:24.925 "name": "BaseBdev1", 00:11:24.925 "uuid": "5bf3155c-41f1-40b5-a84d-1b88e42ff2a9", 00:11:24.925 "is_configured": true, 00:11:24.925 "data_offset": 2048, 00:11:24.925 "data_size": 63488 00:11:24.925 }, 00:11:24.925 { 00:11:24.925 "name": "BaseBdev2", 00:11:24.925 "uuid": "d2216823-44ee-4eb3-bfee-f7e5533d8ff4", 00:11:24.925 "is_configured": true, 00:11:24.925 "data_offset": 2048, 00:11:24.925 "data_size": 63488 00:11:24.925 }, 00:11:24.925 { 00:11:24.925 "name": "BaseBdev3", 00:11:24.925 "uuid": "53b40d69-254b-465e-95a1-371a64f5d0ee", 00:11:24.925 "is_configured": true, 
00:11:24.925 "data_offset": 2048, 00:11:24.925 "data_size": 63488 00:11:24.925 }, 00:11:24.925 { 00:11:24.925 "name": "BaseBdev4", 00:11:24.925 "uuid": "9bccd4c7-0963-4b46-9843-5c7ad60f6510", 00:11:24.925 "is_configured": true, 00:11:24.925 "data_offset": 2048, 00:11:24.925 "data_size": 63488 00:11:24.925 } 00:11:24.925 ] 00:11:24.925 } 00:11:24.925 } 00:11:24.925 }' 00:11:24.925 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:24.925 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:24.925 BaseBdev2 00:11:24.925 BaseBdev3 00:11:24.925 BaseBdev4' 00:11:24.925 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.925 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:24.926 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.926 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:24.926 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.926 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.926 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.926 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.926 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:24.926 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:24.926 05:49:32 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:24.926 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:24.926 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:24.926 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.926 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.926 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.185 [2024-12-12 05:49:32.532643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:25.185 [2024-12-12 05:49:32.532712] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:25.185 [2024-12-12 05:49:32.532784] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.185 "name": "Existed_Raid", 00:11:25.185 "uuid": "6303af2c-fe60-4a9f-9600-7bf6f5359e75", 00:11:25.185 "strip_size_kb": 64, 00:11:25.185 "state": "offline", 00:11:25.185 "raid_level": "concat", 00:11:25.185 "superblock": true, 00:11:25.185 "num_base_bdevs": 4, 00:11:25.185 "num_base_bdevs_discovered": 3, 00:11:25.185 "num_base_bdevs_operational": 3, 00:11:25.185 "base_bdevs_list": [ 00:11:25.185 { 00:11:25.185 "name": null, 00:11:25.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.185 "is_configured": false, 00:11:25.185 "data_offset": 0, 00:11:25.185 "data_size": 63488 00:11:25.185 }, 00:11:25.185 { 00:11:25.185 "name": "BaseBdev2", 00:11:25.185 "uuid": "d2216823-44ee-4eb3-bfee-f7e5533d8ff4", 00:11:25.185 "is_configured": true, 00:11:25.185 "data_offset": 2048, 00:11:25.185 "data_size": 63488 00:11:25.185 }, 00:11:25.185 { 00:11:25.185 "name": "BaseBdev3", 00:11:25.185 "uuid": "53b40d69-254b-465e-95a1-371a64f5d0ee", 00:11:25.185 "is_configured": true, 00:11:25.185 "data_offset": 2048, 00:11:25.185 "data_size": 63488 00:11:25.185 }, 00:11:25.185 { 00:11:25.185 "name": "BaseBdev4", 00:11:25.185 "uuid": "9bccd4c7-0963-4b46-9843-5c7ad60f6510", 00:11:25.185 "is_configured": true, 00:11:25.185 "data_offset": 2048, 00:11:25.185 "data_size": 63488 00:11:25.185 } 00:11:25.185 ] 00:11:25.185 }' 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.185 05:49:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.752 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:25.752 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.752 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:25.752 05:49:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.752 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.752 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.752 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.753 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:25.753 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:25.753 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:25.753 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.753 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.753 [2024-12-12 05:49:33.044288] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:25.753 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.753 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:25.753 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:25.753 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.753 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.753 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:25.753 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.753 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:25.753 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:25.753 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:25.753 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:25.753 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.753 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.753 [2024-12-12 05:49:33.197234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:26.011 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.011 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:26.011 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.011 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:26.011 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.011 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.011 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.011 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.011 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:26.011 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:26.011 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:26.011 05:49:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.011 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.011 [2024-12-12 05:49:33.335892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:26.011 [2024-12-12 05:49:33.335984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:26.011 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.011 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:26.011 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:26.011 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.011 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:26.011 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.011 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.012 BaseBdev2 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.012 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.271 [ 00:11:26.271 { 00:11:26.271 "name": "BaseBdev2", 00:11:26.271 "aliases": [ 00:11:26.271 
"990be8a3-3a4a-4960-8f21-99642a31506e" 00:11:26.271 ], 00:11:26.271 "product_name": "Malloc disk", 00:11:26.271 "block_size": 512, 00:11:26.271 "num_blocks": 65536, 00:11:26.271 "uuid": "990be8a3-3a4a-4960-8f21-99642a31506e", 00:11:26.271 "assigned_rate_limits": { 00:11:26.271 "rw_ios_per_sec": 0, 00:11:26.271 "rw_mbytes_per_sec": 0, 00:11:26.271 "r_mbytes_per_sec": 0, 00:11:26.271 "w_mbytes_per_sec": 0 00:11:26.271 }, 00:11:26.271 "claimed": false, 00:11:26.271 "zoned": false, 00:11:26.271 "supported_io_types": { 00:11:26.271 "read": true, 00:11:26.271 "write": true, 00:11:26.271 "unmap": true, 00:11:26.271 "flush": true, 00:11:26.271 "reset": true, 00:11:26.271 "nvme_admin": false, 00:11:26.271 "nvme_io": false, 00:11:26.271 "nvme_io_md": false, 00:11:26.271 "write_zeroes": true, 00:11:26.271 "zcopy": true, 00:11:26.271 "get_zone_info": false, 00:11:26.271 "zone_management": false, 00:11:26.271 "zone_append": false, 00:11:26.271 "compare": false, 00:11:26.271 "compare_and_write": false, 00:11:26.271 "abort": true, 00:11:26.271 "seek_hole": false, 00:11:26.271 "seek_data": false, 00:11:26.271 "copy": true, 00:11:26.271 "nvme_iov_md": false 00:11:26.271 }, 00:11:26.271 "memory_domains": [ 00:11:26.271 { 00:11:26.271 "dma_device_id": "system", 00:11:26.271 "dma_device_type": 1 00:11:26.271 }, 00:11:26.271 { 00:11:26.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.271 "dma_device_type": 2 00:11:26.271 } 00:11:26.271 ], 00:11:26.271 "driver_specific": {} 00:11:26.271 } 00:11:26.271 ] 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.271 05:49:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.271 BaseBdev3 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.271 [ 00:11:26.271 { 
00:11:26.271 "name": "BaseBdev3", 00:11:26.271 "aliases": [ 00:11:26.271 "9c7ac752-7083-4a7e-94b7-2ba69b39cae3" 00:11:26.271 ], 00:11:26.271 "product_name": "Malloc disk", 00:11:26.271 "block_size": 512, 00:11:26.271 "num_blocks": 65536, 00:11:26.271 "uuid": "9c7ac752-7083-4a7e-94b7-2ba69b39cae3", 00:11:26.271 "assigned_rate_limits": { 00:11:26.271 "rw_ios_per_sec": 0, 00:11:26.271 "rw_mbytes_per_sec": 0, 00:11:26.271 "r_mbytes_per_sec": 0, 00:11:26.271 "w_mbytes_per_sec": 0 00:11:26.271 }, 00:11:26.271 "claimed": false, 00:11:26.271 "zoned": false, 00:11:26.271 "supported_io_types": { 00:11:26.271 "read": true, 00:11:26.271 "write": true, 00:11:26.271 "unmap": true, 00:11:26.271 "flush": true, 00:11:26.271 "reset": true, 00:11:26.271 "nvme_admin": false, 00:11:26.271 "nvme_io": false, 00:11:26.271 "nvme_io_md": false, 00:11:26.271 "write_zeroes": true, 00:11:26.271 "zcopy": true, 00:11:26.271 "get_zone_info": false, 00:11:26.271 "zone_management": false, 00:11:26.271 "zone_append": false, 00:11:26.271 "compare": false, 00:11:26.271 "compare_and_write": false, 00:11:26.271 "abort": true, 00:11:26.271 "seek_hole": false, 00:11:26.271 "seek_data": false, 00:11:26.271 "copy": true, 00:11:26.271 "nvme_iov_md": false 00:11:26.271 }, 00:11:26.271 "memory_domains": [ 00:11:26.271 { 00:11:26.271 "dma_device_id": "system", 00:11:26.271 "dma_device_type": 1 00:11:26.271 }, 00:11:26.271 { 00:11:26.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.271 "dma_device_type": 2 00:11:26.271 } 00:11:26.271 ], 00:11:26.271 "driver_specific": {} 00:11:26.271 } 00:11:26.271 ] 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.271 BaseBdev4 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.271 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:26.272 [ 00:11:26.272 { 00:11:26.272 "name": "BaseBdev4", 00:11:26.272 "aliases": [ 00:11:26.272 "33b21d83-0d60-4ab9-800b-1a89cd6f711e" 00:11:26.272 ], 00:11:26.272 "product_name": "Malloc disk", 00:11:26.272 "block_size": 512, 00:11:26.272 "num_blocks": 65536, 00:11:26.272 "uuid": "33b21d83-0d60-4ab9-800b-1a89cd6f711e", 00:11:26.272 "assigned_rate_limits": { 00:11:26.272 "rw_ios_per_sec": 0, 00:11:26.272 "rw_mbytes_per_sec": 0, 00:11:26.272 "r_mbytes_per_sec": 0, 00:11:26.272 "w_mbytes_per_sec": 0 00:11:26.272 }, 00:11:26.272 "claimed": false, 00:11:26.272 "zoned": false, 00:11:26.272 "supported_io_types": { 00:11:26.272 "read": true, 00:11:26.272 "write": true, 00:11:26.272 "unmap": true, 00:11:26.272 "flush": true, 00:11:26.272 "reset": true, 00:11:26.272 "nvme_admin": false, 00:11:26.272 "nvme_io": false, 00:11:26.272 "nvme_io_md": false, 00:11:26.272 "write_zeroes": true, 00:11:26.272 "zcopy": true, 00:11:26.272 "get_zone_info": false, 00:11:26.272 "zone_management": false, 00:11:26.272 "zone_append": false, 00:11:26.272 "compare": false, 00:11:26.272 "compare_and_write": false, 00:11:26.272 "abort": true, 00:11:26.272 "seek_hole": false, 00:11:26.272 "seek_data": false, 00:11:26.272 "copy": true, 00:11:26.272 "nvme_iov_md": false 00:11:26.272 }, 00:11:26.272 "memory_domains": [ 00:11:26.272 { 00:11:26.272 "dma_device_id": "system", 00:11:26.272 "dma_device_type": 1 00:11:26.272 }, 00:11:26.272 { 00:11:26.272 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.272 "dma_device_type": 2 00:11:26.272 } 00:11:26.272 ], 00:11:26.272 "driver_specific": {} 00:11:26.272 } 00:11:26.272 ] 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:26.272 05:49:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.272 [2024-12-12 05:49:33.720235] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.272 [2024-12-12 05:49:33.720337] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.272 [2024-12-12 05:49:33.720378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:26.272 [2024-12-12 05:49:33.722164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:26.272 [2024-12-12 05:49:33.722251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.272 "name": "Existed_Raid", 00:11:26.272 "uuid": "7f02e1e5-1291-416f-b5d1-8d349ada24f5", 00:11:26.272 "strip_size_kb": 64, 00:11:26.272 "state": "configuring", 00:11:26.272 "raid_level": "concat", 00:11:26.272 "superblock": true, 00:11:26.272 "num_base_bdevs": 4, 00:11:26.272 "num_base_bdevs_discovered": 3, 00:11:26.272 "num_base_bdevs_operational": 4, 00:11:26.272 "base_bdevs_list": [ 00:11:26.272 { 00:11:26.272 "name": "BaseBdev1", 00:11:26.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.272 "is_configured": false, 00:11:26.272 "data_offset": 0, 00:11:26.272 "data_size": 0 00:11:26.272 }, 00:11:26.272 { 00:11:26.272 "name": "BaseBdev2", 00:11:26.272 "uuid": "990be8a3-3a4a-4960-8f21-99642a31506e", 00:11:26.272 "is_configured": true, 00:11:26.272 "data_offset": 2048, 00:11:26.272 "data_size": 63488 
00:11:26.272 }, 00:11:26.272 { 00:11:26.272 "name": "BaseBdev3", 00:11:26.272 "uuid": "9c7ac752-7083-4a7e-94b7-2ba69b39cae3", 00:11:26.272 "is_configured": true, 00:11:26.272 "data_offset": 2048, 00:11:26.272 "data_size": 63488 00:11:26.272 }, 00:11:26.272 { 00:11:26.272 "name": "BaseBdev4", 00:11:26.272 "uuid": "33b21d83-0d60-4ab9-800b-1a89cd6f711e", 00:11:26.272 "is_configured": true, 00:11:26.272 "data_offset": 2048, 00:11:26.272 "data_size": 63488 00:11:26.272 } 00:11:26.272 ] 00:11:26.272 }' 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.272 05:49:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.839 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:26.839 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.839 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.839 [2024-12-12 05:49:34.183450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:26.839 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.839 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:26.839 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.839 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.839 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:26.839 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.839 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:26.839 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.839 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.839 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.839 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.839 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.839 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.839 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.839 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.840 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.840 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.840 "name": "Existed_Raid", 00:11:26.840 "uuid": "7f02e1e5-1291-416f-b5d1-8d349ada24f5", 00:11:26.840 "strip_size_kb": 64, 00:11:26.840 "state": "configuring", 00:11:26.840 "raid_level": "concat", 00:11:26.840 "superblock": true, 00:11:26.840 "num_base_bdevs": 4, 00:11:26.840 "num_base_bdevs_discovered": 2, 00:11:26.840 "num_base_bdevs_operational": 4, 00:11:26.840 "base_bdevs_list": [ 00:11:26.840 { 00:11:26.840 "name": "BaseBdev1", 00:11:26.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.840 "is_configured": false, 00:11:26.840 "data_offset": 0, 00:11:26.840 "data_size": 0 00:11:26.840 }, 00:11:26.840 { 00:11:26.840 "name": null, 00:11:26.840 "uuid": "990be8a3-3a4a-4960-8f21-99642a31506e", 00:11:26.840 "is_configured": false, 00:11:26.840 "data_offset": 0, 00:11:26.840 "data_size": 63488 
00:11:26.840 }, 00:11:26.840 { 00:11:26.840 "name": "BaseBdev3", 00:11:26.840 "uuid": "9c7ac752-7083-4a7e-94b7-2ba69b39cae3", 00:11:26.840 "is_configured": true, 00:11:26.840 "data_offset": 2048, 00:11:26.840 "data_size": 63488 00:11:26.840 }, 00:11:26.840 { 00:11:26.840 "name": "BaseBdev4", 00:11:26.840 "uuid": "33b21d83-0d60-4ab9-800b-1a89cd6f711e", 00:11:26.840 "is_configured": true, 00:11:26.840 "data_offset": 2048, 00:11:26.840 "data_size": 63488 00:11:26.840 } 00:11:26.840 ] 00:11:26.840 }' 00:11:26.840 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.840 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.408 [2024-12-12 05:49:34.730495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:27.408 BaseBdev1 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.408 [ 00:11:27.408 { 00:11:27.408 "name": "BaseBdev1", 00:11:27.408 "aliases": [ 00:11:27.408 "dd94dbe4-7fe1-45ee-ac5b-4230096b43f8" 00:11:27.408 ], 00:11:27.408 "product_name": "Malloc disk", 00:11:27.408 "block_size": 512, 00:11:27.408 "num_blocks": 65536, 00:11:27.408 "uuid": "dd94dbe4-7fe1-45ee-ac5b-4230096b43f8", 00:11:27.408 "assigned_rate_limits": { 00:11:27.408 "rw_ios_per_sec": 0, 00:11:27.408 "rw_mbytes_per_sec": 0, 
00:11:27.408 "r_mbytes_per_sec": 0, 00:11:27.408 "w_mbytes_per_sec": 0 00:11:27.408 }, 00:11:27.408 "claimed": true, 00:11:27.408 "claim_type": "exclusive_write", 00:11:27.408 "zoned": false, 00:11:27.408 "supported_io_types": { 00:11:27.408 "read": true, 00:11:27.408 "write": true, 00:11:27.408 "unmap": true, 00:11:27.408 "flush": true, 00:11:27.408 "reset": true, 00:11:27.408 "nvme_admin": false, 00:11:27.408 "nvme_io": false, 00:11:27.408 "nvme_io_md": false, 00:11:27.408 "write_zeroes": true, 00:11:27.408 "zcopy": true, 00:11:27.408 "get_zone_info": false, 00:11:27.408 "zone_management": false, 00:11:27.408 "zone_append": false, 00:11:27.408 "compare": false, 00:11:27.408 "compare_and_write": false, 00:11:27.408 "abort": true, 00:11:27.408 "seek_hole": false, 00:11:27.408 "seek_data": false, 00:11:27.408 "copy": true, 00:11:27.408 "nvme_iov_md": false 00:11:27.408 }, 00:11:27.408 "memory_domains": [ 00:11:27.408 { 00:11:27.408 "dma_device_id": "system", 00:11:27.408 "dma_device_type": 1 00:11:27.408 }, 00:11:27.408 { 00:11:27.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.408 "dma_device_type": 2 00:11:27.408 } 00:11:27.408 ], 00:11:27.408 "driver_specific": {} 00:11:27.408 } 00:11:27.408 ] 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.408 05:49:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.408 "name": "Existed_Raid", 00:11:27.408 "uuid": "7f02e1e5-1291-416f-b5d1-8d349ada24f5", 00:11:27.408 "strip_size_kb": 64, 00:11:27.408 "state": "configuring", 00:11:27.408 "raid_level": "concat", 00:11:27.408 "superblock": true, 00:11:27.408 "num_base_bdevs": 4, 00:11:27.408 "num_base_bdevs_discovered": 3, 00:11:27.408 "num_base_bdevs_operational": 4, 00:11:27.408 "base_bdevs_list": [ 00:11:27.408 { 00:11:27.408 "name": "BaseBdev1", 00:11:27.408 "uuid": "dd94dbe4-7fe1-45ee-ac5b-4230096b43f8", 00:11:27.408 "is_configured": true, 00:11:27.408 "data_offset": 2048, 00:11:27.408 "data_size": 63488 00:11:27.408 }, 00:11:27.408 { 
00:11:27.408 "name": null, 00:11:27.408 "uuid": "990be8a3-3a4a-4960-8f21-99642a31506e", 00:11:27.408 "is_configured": false, 00:11:27.408 "data_offset": 0, 00:11:27.408 "data_size": 63488 00:11:27.408 }, 00:11:27.408 { 00:11:27.408 "name": "BaseBdev3", 00:11:27.408 "uuid": "9c7ac752-7083-4a7e-94b7-2ba69b39cae3", 00:11:27.408 "is_configured": true, 00:11:27.408 "data_offset": 2048, 00:11:27.408 "data_size": 63488 00:11:27.408 }, 00:11:27.408 { 00:11:27.408 "name": "BaseBdev4", 00:11:27.408 "uuid": "33b21d83-0d60-4ab9-800b-1a89cd6f711e", 00:11:27.408 "is_configured": true, 00:11:27.408 "data_offset": 2048, 00:11:27.408 "data_size": 63488 00:11:27.408 } 00:11:27.408 ] 00:11:27.408 }' 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.408 05:49:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.669 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.669 05:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.669 05:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.669 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.929 [2024-12-12 05:49:35.229722] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.929 05:49:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.929 "name": "Existed_Raid", 00:11:27.929 "uuid": "7f02e1e5-1291-416f-b5d1-8d349ada24f5", 00:11:27.929 "strip_size_kb": 64, 00:11:27.929 "state": "configuring", 00:11:27.929 "raid_level": "concat", 00:11:27.929 "superblock": true, 00:11:27.929 "num_base_bdevs": 4, 00:11:27.929 "num_base_bdevs_discovered": 2, 00:11:27.929 "num_base_bdevs_operational": 4, 00:11:27.929 "base_bdevs_list": [ 00:11:27.929 { 00:11:27.929 "name": "BaseBdev1", 00:11:27.929 "uuid": "dd94dbe4-7fe1-45ee-ac5b-4230096b43f8", 00:11:27.929 "is_configured": true, 00:11:27.929 "data_offset": 2048, 00:11:27.929 "data_size": 63488 00:11:27.929 }, 00:11:27.929 { 00:11:27.929 "name": null, 00:11:27.929 "uuid": "990be8a3-3a4a-4960-8f21-99642a31506e", 00:11:27.929 "is_configured": false, 00:11:27.929 "data_offset": 0, 00:11:27.929 "data_size": 63488 00:11:27.929 }, 00:11:27.929 { 00:11:27.929 "name": null, 00:11:27.929 "uuid": "9c7ac752-7083-4a7e-94b7-2ba69b39cae3", 00:11:27.929 "is_configured": false, 00:11:27.929 "data_offset": 0, 00:11:27.929 "data_size": 63488 00:11:27.929 }, 00:11:27.929 { 00:11:27.929 "name": "BaseBdev4", 00:11:27.929 "uuid": "33b21d83-0d60-4ab9-800b-1a89cd6f711e", 00:11:27.929 "is_configured": true, 00:11:27.929 "data_offset": 2048, 00:11:27.929 "data_size": 63488 00:11:27.929 } 00:11:27.929 ] 00:11:27.929 }' 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.929 05:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.188 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.188 05:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.188 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:28.188 
05:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.188 05:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.188 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:28.188 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:28.188 05:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.188 05:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.188 [2024-12-12 05:49:35.708896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.447 05:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.447 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:28.447 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.447 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.447 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.447 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.447 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.447 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.447 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.447 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:28.447 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.447 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.447 05:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.447 05:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.447 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.447 05:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.447 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.447 "name": "Existed_Raid", 00:11:28.447 "uuid": "7f02e1e5-1291-416f-b5d1-8d349ada24f5", 00:11:28.447 "strip_size_kb": 64, 00:11:28.447 "state": "configuring", 00:11:28.447 "raid_level": "concat", 00:11:28.447 "superblock": true, 00:11:28.447 "num_base_bdevs": 4, 00:11:28.447 "num_base_bdevs_discovered": 3, 00:11:28.448 "num_base_bdevs_operational": 4, 00:11:28.448 "base_bdevs_list": [ 00:11:28.448 { 00:11:28.448 "name": "BaseBdev1", 00:11:28.448 "uuid": "dd94dbe4-7fe1-45ee-ac5b-4230096b43f8", 00:11:28.448 "is_configured": true, 00:11:28.448 "data_offset": 2048, 00:11:28.448 "data_size": 63488 00:11:28.448 }, 00:11:28.448 { 00:11:28.448 "name": null, 00:11:28.448 "uuid": "990be8a3-3a4a-4960-8f21-99642a31506e", 00:11:28.448 "is_configured": false, 00:11:28.448 "data_offset": 0, 00:11:28.448 "data_size": 63488 00:11:28.448 }, 00:11:28.448 { 00:11:28.448 "name": "BaseBdev3", 00:11:28.448 "uuid": "9c7ac752-7083-4a7e-94b7-2ba69b39cae3", 00:11:28.448 "is_configured": true, 00:11:28.448 "data_offset": 2048, 00:11:28.448 "data_size": 63488 00:11:28.448 }, 00:11:28.448 { 00:11:28.448 "name": "BaseBdev4", 00:11:28.448 "uuid": 
"33b21d83-0d60-4ab9-800b-1a89cd6f711e", 00:11:28.448 "is_configured": true, 00:11:28.448 "data_offset": 2048, 00:11:28.448 "data_size": 63488 00:11:28.448 } 00:11:28.448 ] 00:11:28.448 }' 00:11:28.448 05:49:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.448 05:49:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.707 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.707 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:28.707 05:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.707 05:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.707 05:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.966 [2024-12-12 05:49:36.236033] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.966 "name": "Existed_Raid", 00:11:28.966 "uuid": "7f02e1e5-1291-416f-b5d1-8d349ada24f5", 00:11:28.966 "strip_size_kb": 64, 00:11:28.966 "state": "configuring", 00:11:28.966 "raid_level": "concat", 00:11:28.966 "superblock": true, 00:11:28.966 "num_base_bdevs": 4, 00:11:28.966 "num_base_bdevs_discovered": 2, 00:11:28.966 "num_base_bdevs_operational": 4, 00:11:28.966 "base_bdevs_list": [ 00:11:28.966 { 00:11:28.966 "name": null, 00:11:28.966 
"uuid": "dd94dbe4-7fe1-45ee-ac5b-4230096b43f8", 00:11:28.966 "is_configured": false, 00:11:28.966 "data_offset": 0, 00:11:28.966 "data_size": 63488 00:11:28.966 }, 00:11:28.966 { 00:11:28.966 "name": null, 00:11:28.966 "uuid": "990be8a3-3a4a-4960-8f21-99642a31506e", 00:11:28.966 "is_configured": false, 00:11:28.966 "data_offset": 0, 00:11:28.966 "data_size": 63488 00:11:28.966 }, 00:11:28.966 { 00:11:28.966 "name": "BaseBdev3", 00:11:28.966 "uuid": "9c7ac752-7083-4a7e-94b7-2ba69b39cae3", 00:11:28.966 "is_configured": true, 00:11:28.966 "data_offset": 2048, 00:11:28.966 "data_size": 63488 00:11:28.966 }, 00:11:28.966 { 00:11:28.966 "name": "BaseBdev4", 00:11:28.966 "uuid": "33b21d83-0d60-4ab9-800b-1a89cd6f711e", 00:11:28.966 "is_configured": true, 00:11:28.966 "data_offset": 2048, 00:11:28.966 "data_size": 63488 00:11:28.966 } 00:11:28.966 ] 00:11:28.966 }' 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.966 05:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.537 [2024-12-12 05:49:36.867563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.537 05:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.537 05:49:36 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.538 05:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.538 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.538 "name": "Existed_Raid", 00:11:29.538 "uuid": "7f02e1e5-1291-416f-b5d1-8d349ada24f5", 00:11:29.538 "strip_size_kb": 64, 00:11:29.538 "state": "configuring", 00:11:29.538 "raid_level": "concat", 00:11:29.538 "superblock": true, 00:11:29.538 "num_base_bdevs": 4, 00:11:29.538 "num_base_bdevs_discovered": 3, 00:11:29.538 "num_base_bdevs_operational": 4, 00:11:29.538 "base_bdevs_list": [ 00:11:29.538 { 00:11:29.538 "name": null, 00:11:29.538 "uuid": "dd94dbe4-7fe1-45ee-ac5b-4230096b43f8", 00:11:29.538 "is_configured": false, 00:11:29.538 "data_offset": 0, 00:11:29.538 "data_size": 63488 00:11:29.538 }, 00:11:29.538 { 00:11:29.538 "name": "BaseBdev2", 00:11:29.538 "uuid": "990be8a3-3a4a-4960-8f21-99642a31506e", 00:11:29.538 "is_configured": true, 00:11:29.538 "data_offset": 2048, 00:11:29.538 "data_size": 63488 00:11:29.538 }, 00:11:29.538 { 00:11:29.538 "name": "BaseBdev3", 00:11:29.538 "uuid": "9c7ac752-7083-4a7e-94b7-2ba69b39cae3", 00:11:29.538 "is_configured": true, 00:11:29.538 "data_offset": 2048, 00:11:29.538 "data_size": 63488 00:11:29.538 }, 00:11:29.538 { 00:11:29.538 "name": "BaseBdev4", 00:11:29.538 "uuid": "33b21d83-0d60-4ab9-800b-1a89cd6f711e", 00:11:29.538 "is_configured": true, 00:11:29.538 "data_offset": 2048, 00:11:29.538 "data_size": 63488 00:11:29.538 } 00:11:29.538 ] 00:11:29.538 }' 00:11:29.538 05:49:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.538 05:49:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.797 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.797 05:49:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.797 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.797 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dd94dbe4-7fe1-45ee-ac5b-4230096b43f8 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.057 [2024-12-12 05:49:37.438545] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:30.057 [2024-12-12 05:49:37.438892] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:30.057 [2024-12-12 05:49:37.438941] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:30.057 [2024-12-12 05:49:37.439262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:30.057 NewBaseBdev 00:11:30.057 [2024-12-12 05:49:37.439468] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:30.057 [2024-12-12 05:49:37.439483] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:30.057 [2024-12-12 05:49:37.439647] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:30.057 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.057 05:49:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.057 [ 00:11:30.057 { 00:11:30.057 "name": "NewBaseBdev", 00:11:30.057 "aliases": [ 00:11:30.057 "dd94dbe4-7fe1-45ee-ac5b-4230096b43f8" 00:11:30.057 ], 00:11:30.057 "product_name": "Malloc disk", 00:11:30.057 "block_size": 512, 00:11:30.057 "num_blocks": 65536, 00:11:30.057 "uuid": "dd94dbe4-7fe1-45ee-ac5b-4230096b43f8", 00:11:30.057 "assigned_rate_limits": { 00:11:30.057 "rw_ios_per_sec": 0, 00:11:30.057 "rw_mbytes_per_sec": 0, 00:11:30.057 "r_mbytes_per_sec": 0, 00:11:30.057 "w_mbytes_per_sec": 0 00:11:30.057 }, 00:11:30.057 "claimed": true, 00:11:30.057 "claim_type": "exclusive_write", 00:11:30.057 "zoned": false, 00:11:30.057 "supported_io_types": { 00:11:30.057 "read": true, 00:11:30.057 "write": true, 00:11:30.057 "unmap": true, 00:11:30.057 "flush": true, 00:11:30.057 "reset": true, 00:11:30.057 "nvme_admin": false, 00:11:30.057 "nvme_io": false, 00:11:30.057 "nvme_io_md": false, 00:11:30.057 "write_zeroes": true, 00:11:30.057 "zcopy": true, 00:11:30.058 "get_zone_info": false, 00:11:30.058 "zone_management": false, 00:11:30.058 "zone_append": false, 00:11:30.058 "compare": false, 00:11:30.058 "compare_and_write": false, 00:11:30.058 "abort": true, 00:11:30.058 "seek_hole": false, 00:11:30.058 "seek_data": false, 00:11:30.058 "copy": true, 00:11:30.058 "nvme_iov_md": false 00:11:30.058 }, 00:11:30.058 "memory_domains": [ 00:11:30.058 { 00:11:30.058 "dma_device_id": "system", 00:11:30.058 "dma_device_type": 1 00:11:30.058 }, 00:11:30.058 { 00:11:30.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.058 "dma_device_type": 2 00:11:30.058 } 00:11:30.058 ], 00:11:30.058 "driver_specific": {} 00:11:30.058 } 00:11:30.058 ] 00:11:30.058 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.058 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:30.058 05:49:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:30.058 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.058 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:30.058 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:30.058 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:30.058 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:30.058 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.058 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.058 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.058 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.058 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.058 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.058 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.058 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.058 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.058 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.058 "name": "Existed_Raid", 00:11:30.058 "uuid": "7f02e1e5-1291-416f-b5d1-8d349ada24f5", 00:11:30.058 "strip_size_kb": 64, 00:11:30.058 
"state": "online", 00:11:30.058 "raid_level": "concat", 00:11:30.058 "superblock": true, 00:11:30.058 "num_base_bdevs": 4, 00:11:30.058 "num_base_bdevs_discovered": 4, 00:11:30.058 "num_base_bdevs_operational": 4, 00:11:30.058 "base_bdevs_list": [ 00:11:30.058 { 00:11:30.058 "name": "NewBaseBdev", 00:11:30.058 "uuid": "dd94dbe4-7fe1-45ee-ac5b-4230096b43f8", 00:11:30.058 "is_configured": true, 00:11:30.058 "data_offset": 2048, 00:11:30.058 "data_size": 63488 00:11:30.058 }, 00:11:30.058 { 00:11:30.058 "name": "BaseBdev2", 00:11:30.058 "uuid": "990be8a3-3a4a-4960-8f21-99642a31506e", 00:11:30.058 "is_configured": true, 00:11:30.058 "data_offset": 2048, 00:11:30.058 "data_size": 63488 00:11:30.058 }, 00:11:30.058 { 00:11:30.058 "name": "BaseBdev3", 00:11:30.058 "uuid": "9c7ac752-7083-4a7e-94b7-2ba69b39cae3", 00:11:30.058 "is_configured": true, 00:11:30.058 "data_offset": 2048, 00:11:30.058 "data_size": 63488 00:11:30.058 }, 00:11:30.058 { 00:11:30.058 "name": "BaseBdev4", 00:11:30.058 "uuid": "33b21d83-0d60-4ab9-800b-1a89cd6f711e", 00:11:30.058 "is_configured": true, 00:11:30.058 "data_offset": 2048, 00:11:30.058 "data_size": 63488 00:11:30.058 } 00:11:30.058 ] 00:11:30.058 }' 00:11:30.058 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.058 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.627 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:30.627 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:30.627 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:30.627 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:30.627 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:30.627 
05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:30.627 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:30.627 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.627 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.627 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:30.627 [2024-12-12 05:49:37.926098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.627 05:49:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.627 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:30.627 "name": "Existed_Raid", 00:11:30.627 "aliases": [ 00:11:30.627 "7f02e1e5-1291-416f-b5d1-8d349ada24f5" 00:11:30.627 ], 00:11:30.627 "product_name": "Raid Volume", 00:11:30.627 "block_size": 512, 00:11:30.627 "num_blocks": 253952, 00:11:30.627 "uuid": "7f02e1e5-1291-416f-b5d1-8d349ada24f5", 00:11:30.627 "assigned_rate_limits": { 00:11:30.627 "rw_ios_per_sec": 0, 00:11:30.627 "rw_mbytes_per_sec": 0, 00:11:30.627 "r_mbytes_per_sec": 0, 00:11:30.627 "w_mbytes_per_sec": 0 00:11:30.627 }, 00:11:30.627 "claimed": false, 00:11:30.627 "zoned": false, 00:11:30.627 "supported_io_types": { 00:11:30.627 "read": true, 00:11:30.627 "write": true, 00:11:30.627 "unmap": true, 00:11:30.627 "flush": true, 00:11:30.627 "reset": true, 00:11:30.627 "nvme_admin": false, 00:11:30.627 "nvme_io": false, 00:11:30.627 "nvme_io_md": false, 00:11:30.627 "write_zeroes": true, 00:11:30.627 "zcopy": false, 00:11:30.627 "get_zone_info": false, 00:11:30.627 "zone_management": false, 00:11:30.627 "zone_append": false, 00:11:30.627 "compare": false, 00:11:30.627 "compare_and_write": false, 00:11:30.627 "abort": 
false, 00:11:30.627 "seek_hole": false, 00:11:30.627 "seek_data": false, 00:11:30.627 "copy": false, 00:11:30.627 "nvme_iov_md": false 00:11:30.627 }, 00:11:30.627 "memory_domains": [ 00:11:30.627 { 00:11:30.627 "dma_device_id": "system", 00:11:30.627 "dma_device_type": 1 00:11:30.627 }, 00:11:30.627 { 00:11:30.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.627 "dma_device_type": 2 00:11:30.627 }, 00:11:30.627 { 00:11:30.627 "dma_device_id": "system", 00:11:30.627 "dma_device_type": 1 00:11:30.627 }, 00:11:30.627 { 00:11:30.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.627 "dma_device_type": 2 00:11:30.627 }, 00:11:30.627 { 00:11:30.627 "dma_device_id": "system", 00:11:30.627 "dma_device_type": 1 00:11:30.627 }, 00:11:30.627 { 00:11:30.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.627 "dma_device_type": 2 00:11:30.627 }, 00:11:30.627 { 00:11:30.627 "dma_device_id": "system", 00:11:30.627 "dma_device_type": 1 00:11:30.627 }, 00:11:30.627 { 00:11:30.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.627 "dma_device_type": 2 00:11:30.627 } 00:11:30.627 ], 00:11:30.627 "driver_specific": { 00:11:30.627 "raid": { 00:11:30.627 "uuid": "7f02e1e5-1291-416f-b5d1-8d349ada24f5", 00:11:30.627 "strip_size_kb": 64, 00:11:30.627 "state": "online", 00:11:30.627 "raid_level": "concat", 00:11:30.627 "superblock": true, 00:11:30.627 "num_base_bdevs": 4, 00:11:30.627 "num_base_bdevs_discovered": 4, 00:11:30.627 "num_base_bdevs_operational": 4, 00:11:30.627 "base_bdevs_list": [ 00:11:30.627 { 00:11:30.627 "name": "NewBaseBdev", 00:11:30.627 "uuid": "dd94dbe4-7fe1-45ee-ac5b-4230096b43f8", 00:11:30.627 "is_configured": true, 00:11:30.627 "data_offset": 2048, 00:11:30.627 "data_size": 63488 00:11:30.627 }, 00:11:30.627 { 00:11:30.627 "name": "BaseBdev2", 00:11:30.627 "uuid": "990be8a3-3a4a-4960-8f21-99642a31506e", 00:11:30.627 "is_configured": true, 00:11:30.627 "data_offset": 2048, 00:11:30.627 "data_size": 63488 00:11:30.627 }, 00:11:30.628 { 00:11:30.628 
"name": "BaseBdev3", 00:11:30.628 "uuid": "9c7ac752-7083-4a7e-94b7-2ba69b39cae3", 00:11:30.628 "is_configured": true, 00:11:30.628 "data_offset": 2048, 00:11:30.628 "data_size": 63488 00:11:30.628 }, 00:11:30.628 { 00:11:30.628 "name": "BaseBdev4", 00:11:30.628 "uuid": "33b21d83-0d60-4ab9-800b-1a89cd6f711e", 00:11:30.628 "is_configured": true, 00:11:30.628 "data_offset": 2048, 00:11:30.628 "data_size": 63488 00:11:30.628 } 00:11:30.628 ] 00:11:30.628 } 00:11:30.628 } 00:11:30.628 }' 00:11:30.628 05:49:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:30.628 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:30.628 BaseBdev2 00:11:30.628 BaseBdev3 00:11:30.628 BaseBdev4' 00:11:30.628 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.628 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:30.628 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.628 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:30.628 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.628 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.628 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.628 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.628 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.628 05:49:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.628 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.628 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:30.628 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.628 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.628 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.628 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.887 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.887 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.887 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.887 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.887 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:30.887 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.887 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.887 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.887 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.887 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:30.887 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:30.887 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:30.887 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:30.887 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.887 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.887 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.887 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:30.887 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:30.887 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:30.887 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.887 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.887 [2024-12-12 05:49:38.257140] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:30.888 [2024-12-12 05:49:38.257203] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:30.888 [2024-12-12 05:49:38.257316] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:30.888 [2024-12-12 05:49:38.257422] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:30.888 [2024-12-12 05:49:38.257469] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:30.888 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.888 05:49:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72854 00:11:30.888 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 72854 ']' 00:11:30.888 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 72854 00:11:30.888 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:30.888 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:30.888 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72854 00:11:30.888 killing process with pid 72854 00:11:30.888 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:30.888 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:30.888 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72854' 00:11:30.888 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 72854 00:11:30.888 [2024-12-12 05:49:38.302702] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:30.888 05:49:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 72854 00:11:31.455 [2024-12-12 05:49:38.686654] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.394 05:49:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:32.394 00:11:32.394 real 0m11.416s 00:11:32.394 user 0m18.190s 00:11:32.394 sys 0m2.064s 00:11:32.394 05:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:32.394 
************************************ 00:11:32.394 END TEST raid_state_function_test_sb 00:11:32.394 ************************************ 00:11:32.394 05:49:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.394 05:49:39 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:11:32.394 05:49:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:32.394 05:49:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:32.394 05:49:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.394 ************************************ 00:11:32.394 START TEST raid_superblock_test 00:11:32.394 ************************************ 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=73520 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 73520 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 73520 ']' 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:32.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:32.394 05:49:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.653 [2024-12-12 05:49:39.941074] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:11:32.653 [2024-12-12 05:49:39.941301] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73520 ] 00:11:32.653 [2024-12-12 05:49:40.114898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:32.942 [2024-12-12 05:49:40.223132] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:32.942 [2024-12-12 05:49:40.417897] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:32.942 [2024-12-12 05:49:40.417975] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:33.527 
05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.527 malloc1 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.527 [2024-12-12 05:49:40.814801] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:33.527 [2024-12-12 05:49:40.814923] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.527 [2024-12-12 05:49:40.814967] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:33.527 [2024-12-12 05:49:40.814996] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.527 [2024-12-12 05:49:40.817158] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.527 [2024-12-12 05:49:40.817225] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:33.527 pt1 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.527 malloc2 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.527 [2024-12-12 05:49:40.873257] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:33.527 [2024-12-12 05:49:40.873370] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.527 [2024-12-12 05:49:40.873395] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:33.527 [2024-12-12 05:49:40.873404] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.527 [2024-12-12 05:49:40.875450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.527 [2024-12-12 05:49:40.875488] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:33.527 
pt2 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.527 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.527 malloc3 00:11:33.528 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.528 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:33.528 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.528 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.528 [2024-12-12 05:49:40.940669] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:33.528 [2024-12-12 05:49:40.940765] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.528 [2024-12-12 05:49:40.940820] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:33.528 [2024-12-12 05:49:40.940851] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.528 [2024-12-12 05:49:40.942887] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.528 [2024-12-12 05:49:40.942958] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:33.528 pt3 00:11:33.528 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.528 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:33.528 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.528 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:33.528 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:33.528 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:33.528 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:33.528 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:33.528 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:33.528 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:33.528 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.528 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.528 malloc4 00:11:33.528 05:49:40 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.528 05:49:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:33.528 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.528 05:49:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.528 [2024-12-12 05:49:40.998976] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:33.528 [2024-12-12 05:49:40.999084] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:33.528 [2024-12-12 05:49:40.999124] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:33.528 [2024-12-12 05:49:40.999154] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:33.528 [2024-12-12 05:49:41.001291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:33.528 [2024-12-12 05:49:41.001360] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:33.528 pt4 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.528 [2024-12-12 05:49:41.010988] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:33.528 [2024-12-12 
05:49:41.012734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:33.528 [2024-12-12 05:49:41.012870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:33.528 [2024-12-12 05:49:41.012941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:33.528 [2024-12-12 05:49:41.013193] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:33.528 [2024-12-12 05:49:41.013238] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:33.528 [2024-12-12 05:49:41.013513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:33.528 [2024-12-12 05:49:41.013717] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:33.528 [2024-12-12 05:49:41.013763] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:33.528 [2024-12-12 05:49:41.013966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.528 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.787 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.787 "name": "raid_bdev1", 00:11:33.787 "uuid": "827ea676-bb1d-4e84-a83f-16e75d523cc2", 00:11:33.787 "strip_size_kb": 64, 00:11:33.787 "state": "online", 00:11:33.787 "raid_level": "concat", 00:11:33.787 "superblock": true, 00:11:33.787 "num_base_bdevs": 4, 00:11:33.787 "num_base_bdevs_discovered": 4, 00:11:33.787 "num_base_bdevs_operational": 4, 00:11:33.787 "base_bdevs_list": [ 00:11:33.787 { 00:11:33.787 "name": "pt1", 00:11:33.787 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:33.787 "is_configured": true, 00:11:33.787 "data_offset": 2048, 00:11:33.787 "data_size": 63488 00:11:33.787 }, 00:11:33.787 { 00:11:33.787 "name": "pt2", 00:11:33.787 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:33.787 "is_configured": true, 00:11:33.787 "data_offset": 2048, 00:11:33.787 "data_size": 63488 00:11:33.787 }, 00:11:33.787 { 00:11:33.787 "name": "pt3", 00:11:33.787 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:33.787 "is_configured": true, 00:11:33.787 "data_offset": 2048, 00:11:33.787 
"data_size": 63488 00:11:33.787 }, 00:11:33.787 { 00:11:33.787 "name": "pt4", 00:11:33.787 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:33.787 "is_configured": true, 00:11:33.787 "data_offset": 2048, 00:11:33.787 "data_size": 63488 00:11:33.787 } 00:11:33.787 ] 00:11:33.787 }' 00:11:33.787 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.787 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.046 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:34.046 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:34.046 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:34.046 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:34.046 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:34.046 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:34.046 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:34.046 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:34.046 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.046 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.046 [2024-12-12 05:49:41.426610] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.046 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.046 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:34.046 "name": "raid_bdev1", 00:11:34.046 "aliases": [ 00:11:34.046 "827ea676-bb1d-4e84-a83f-16e75d523cc2" 
00:11:34.046 ], 00:11:34.046 "product_name": "Raid Volume", 00:11:34.046 "block_size": 512, 00:11:34.046 "num_blocks": 253952, 00:11:34.046 "uuid": "827ea676-bb1d-4e84-a83f-16e75d523cc2", 00:11:34.046 "assigned_rate_limits": { 00:11:34.046 "rw_ios_per_sec": 0, 00:11:34.046 "rw_mbytes_per_sec": 0, 00:11:34.046 "r_mbytes_per_sec": 0, 00:11:34.046 "w_mbytes_per_sec": 0 00:11:34.046 }, 00:11:34.046 "claimed": false, 00:11:34.046 "zoned": false, 00:11:34.046 "supported_io_types": { 00:11:34.046 "read": true, 00:11:34.046 "write": true, 00:11:34.046 "unmap": true, 00:11:34.046 "flush": true, 00:11:34.046 "reset": true, 00:11:34.046 "nvme_admin": false, 00:11:34.046 "nvme_io": false, 00:11:34.046 "nvme_io_md": false, 00:11:34.046 "write_zeroes": true, 00:11:34.046 "zcopy": false, 00:11:34.046 "get_zone_info": false, 00:11:34.046 "zone_management": false, 00:11:34.046 "zone_append": false, 00:11:34.046 "compare": false, 00:11:34.046 "compare_and_write": false, 00:11:34.046 "abort": false, 00:11:34.046 "seek_hole": false, 00:11:34.046 "seek_data": false, 00:11:34.046 "copy": false, 00:11:34.046 "nvme_iov_md": false 00:11:34.046 }, 00:11:34.046 "memory_domains": [ 00:11:34.046 { 00:11:34.046 "dma_device_id": "system", 00:11:34.046 "dma_device_type": 1 00:11:34.046 }, 00:11:34.046 { 00:11:34.046 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.046 "dma_device_type": 2 00:11:34.046 }, 00:11:34.046 { 00:11:34.047 "dma_device_id": "system", 00:11:34.047 "dma_device_type": 1 00:11:34.047 }, 00:11:34.047 { 00:11:34.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.047 "dma_device_type": 2 00:11:34.047 }, 00:11:34.047 { 00:11:34.047 "dma_device_id": "system", 00:11:34.047 "dma_device_type": 1 00:11:34.047 }, 00:11:34.047 { 00:11:34.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.047 "dma_device_type": 2 00:11:34.047 }, 00:11:34.047 { 00:11:34.047 "dma_device_id": "system", 00:11:34.047 "dma_device_type": 1 00:11:34.047 }, 00:11:34.047 { 00:11:34.047 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:34.047 "dma_device_type": 2 00:11:34.047 } 00:11:34.047 ], 00:11:34.047 "driver_specific": { 00:11:34.047 "raid": { 00:11:34.047 "uuid": "827ea676-bb1d-4e84-a83f-16e75d523cc2", 00:11:34.047 "strip_size_kb": 64, 00:11:34.047 "state": "online", 00:11:34.047 "raid_level": "concat", 00:11:34.047 "superblock": true, 00:11:34.047 "num_base_bdevs": 4, 00:11:34.047 "num_base_bdevs_discovered": 4, 00:11:34.047 "num_base_bdevs_operational": 4, 00:11:34.047 "base_bdevs_list": [ 00:11:34.047 { 00:11:34.047 "name": "pt1", 00:11:34.047 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:34.047 "is_configured": true, 00:11:34.047 "data_offset": 2048, 00:11:34.047 "data_size": 63488 00:11:34.047 }, 00:11:34.047 { 00:11:34.047 "name": "pt2", 00:11:34.047 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.047 "is_configured": true, 00:11:34.047 "data_offset": 2048, 00:11:34.047 "data_size": 63488 00:11:34.047 }, 00:11:34.047 { 00:11:34.047 "name": "pt3", 00:11:34.047 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:34.047 "is_configured": true, 00:11:34.047 "data_offset": 2048, 00:11:34.047 "data_size": 63488 00:11:34.047 }, 00:11:34.047 { 00:11:34.047 "name": "pt4", 00:11:34.047 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:34.047 "is_configured": true, 00:11:34.047 "data_offset": 2048, 00:11:34.047 "data_size": 63488 00:11:34.047 } 00:11:34.047 ] 00:11:34.047 } 00:11:34.047 } 00:11:34.047 }' 00:11:34.047 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:34.047 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:34.047 pt2 00:11:34.047 pt3 00:11:34.047 pt4' 00:11:34.047 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.047 05:49:41 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:34.047 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.047 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:34.047 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.047 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.047 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.047 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.307 05:49:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.307 [2024-12-12 05:49:41.730012] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=827ea676-bb1d-4e84-a83f-16e75d523cc2 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 827ea676-bb1d-4e84-a83f-16e75d523cc2 ']' 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.307 [2024-12-12 05:49:41.765656] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:34.307 [2024-12-12 05:49:41.765724] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:34.307 [2024-12-12 05:49:41.765817] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:34.307 [2024-12-12 05:49:41.765921] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:34.307 [2024-12-12 05:49:41.765975] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.307 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.567 05:49:41 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.567 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.567 [2024-12-12 05:49:41.929390] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:34.567 [2024-12-12 05:49:41.931248] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:34.567 [2024-12-12 05:49:41.931357] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:34.567 [2024-12-12 05:49:41.931409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:34.567 [2024-12-12 05:49:41.931484] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:34.567 [2024-12-12 05:49:41.931595] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:34.567 [2024-12-12 05:49:41.931664] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:34.567 [2024-12-12 05:49:41.931730] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:34.567 [2024-12-12 05:49:41.931780] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:34.567 [2024-12-12 05:49:41.931816] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:34.567 request: 00:11:34.567 { 00:11:34.567 "name": "raid_bdev1", 00:11:34.567 "raid_level": "concat", 00:11:34.567 "base_bdevs": [ 00:11:34.567 "malloc1", 00:11:34.567 "malloc2", 00:11:34.567 "malloc3", 00:11:34.567 "malloc4" 00:11:34.567 ], 00:11:34.567 "strip_size_kb": 64, 00:11:34.567 "superblock": false, 00:11:34.567 "method": "bdev_raid_create", 00:11:34.567 "req_id": 1 00:11:34.567 } 00:11:34.567 Got JSON-RPC error response 00:11:34.567 response: 00:11:34.567 { 00:11:34.568 "code": -17, 00:11:34.568 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:34.568 } 00:11:34.568 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:34.568 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:34.568 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:34.568 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:34.568 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:34.568 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.568 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.568 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.568 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:34.568 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.568 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:34.568 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:34.568 05:49:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:34.568 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.568 05:49:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.568 [2024-12-12 05:49:41.997247] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:34.568 [2024-12-12 05:49:41.997343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:34.568 [2024-12-12 05:49:41.997383] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:34.568 [2024-12-12 05:49:41.997414] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:34.568 [2024-12-12 05:49:41.999669] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:34.568 [2024-12-12 05:49:41.999748] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:34.568 [2024-12-12 05:49:41.999856] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:34.568 [2024-12-12 05:49:41.999951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:34.568 pt1 00:11:34.568 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.568 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:34.568 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:34.568 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:34.568 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:34.568 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:34.568 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:34.568 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.568 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.568 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.568 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.568 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:34.568 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.568 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.568 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.568 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.568 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.568 "name": "raid_bdev1", 00:11:34.568 "uuid": "827ea676-bb1d-4e84-a83f-16e75d523cc2", 00:11:34.568 "strip_size_kb": 64, 00:11:34.568 "state": "configuring", 00:11:34.568 "raid_level": "concat", 00:11:34.568 "superblock": true, 00:11:34.568 "num_base_bdevs": 4, 00:11:34.568 "num_base_bdevs_discovered": 1, 00:11:34.568 "num_base_bdevs_operational": 4, 00:11:34.568 "base_bdevs_list": [ 00:11:34.568 { 00:11:34.568 "name": "pt1", 00:11:34.568 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:34.568 "is_configured": true, 00:11:34.568 "data_offset": 2048, 00:11:34.568 "data_size": 63488 00:11:34.568 }, 00:11:34.568 { 00:11:34.568 "name": null, 00:11:34.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:34.568 "is_configured": false, 00:11:34.568 "data_offset": 2048, 00:11:34.568 "data_size": 63488 00:11:34.568 }, 00:11:34.568 { 00:11:34.568 "name": null, 00:11:34.568 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:34.568 "is_configured": false, 00:11:34.568 "data_offset": 2048, 00:11:34.568 "data_size": 63488 00:11:34.568 }, 00:11:34.568 { 00:11:34.568 "name": null, 00:11:34.568 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:34.568 "is_configured": false, 00:11:34.568 "data_offset": 2048, 00:11:34.568 "data_size": 63488 00:11:34.568 } 00:11:34.568 ] 00:11:34.568 }' 00:11:34.568 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.568 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.137 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:35.137 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:35.137 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.137 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.137 [2024-12-12 05:49:42.452512] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:35.137 [2024-12-12 05:49:42.452668] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.137 [2024-12-12 05:49:42.452705] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:35.137 [2024-12-12 05:49:42.452735] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.137 [2024-12-12 05:49:42.453209] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.137 [2024-12-12 05:49:42.453271] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:35.137 [2024-12-12 05:49:42.453393] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:35.137 [2024-12-12 05:49:42.453447] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:35.137 pt2 00:11:35.137 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.137 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:35.137 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.137 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.137 [2024-12-12 05:49:42.464467] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:35.137 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.137 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:11:35.137 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.137 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:35.137 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.137 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.138 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.138 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.138 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.138 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.138 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.138 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.138 05:49:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.138 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.138 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.138 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.138 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.138 "name": "raid_bdev1", 00:11:35.138 "uuid": "827ea676-bb1d-4e84-a83f-16e75d523cc2", 00:11:35.138 "strip_size_kb": 64, 00:11:35.138 "state": "configuring", 00:11:35.138 "raid_level": "concat", 00:11:35.138 "superblock": true, 00:11:35.138 "num_base_bdevs": 4, 00:11:35.138 "num_base_bdevs_discovered": 1, 00:11:35.138 "num_base_bdevs_operational": 4, 00:11:35.138 "base_bdevs_list": [ 00:11:35.138 { 00:11:35.138 "name": "pt1", 00:11:35.138 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:35.138 "is_configured": true, 00:11:35.138 "data_offset": 2048, 00:11:35.138 "data_size": 63488 00:11:35.138 }, 00:11:35.138 { 00:11:35.138 "name": null, 00:11:35.138 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.138 "is_configured": false, 00:11:35.138 "data_offset": 0, 00:11:35.138 "data_size": 63488 00:11:35.138 }, 00:11:35.138 { 00:11:35.138 "name": null, 00:11:35.138 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:35.138 "is_configured": false, 00:11:35.138 "data_offset": 2048, 00:11:35.138 "data_size": 63488 00:11:35.138 }, 00:11:35.138 { 00:11:35.138 "name": null, 00:11:35.138 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:35.138 "is_configured": false, 00:11:35.138 "data_offset": 2048, 00:11:35.138 "data_size": 63488 00:11:35.138 } 00:11:35.138 ] 00:11:35.138 }' 00:11:35.138 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.138 05:49:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:35.707 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:35.707 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:35.707 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:35.707 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.707 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.707 [2024-12-12 05:49:42.927667] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:35.707 [2024-12-12 05:49:42.927817] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.707 [2024-12-12 05:49:42.927855] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:35.707 [2024-12-12 05:49:42.927882] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.707 [2024-12-12 05:49:42.928381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.707 [2024-12-12 05:49:42.928447] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:35.707 [2024-12-12 05:49:42.928581] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:35.707 [2024-12-12 05:49:42.928632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:35.707 pt2 00:11:35.707 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.707 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:35.707 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:35.707 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:35.707 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.707 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.707 [2024-12-12 05:49:42.939621] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:35.708 [2024-12-12 05:49:42.939712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.708 [2024-12-12 05:49:42.939760] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:35.708 [2024-12-12 05:49:42.939786] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.708 [2024-12-12 05:49:42.940192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.708 [2024-12-12 05:49:42.940244] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:35.708 [2024-12-12 05:49:42.940337] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:35.708 [2024-12-12 05:49:42.940394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:35.708 pt3 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.708 [2024-12-12 05:49:42.951603] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:35.708 [2024-12-12 05:49:42.951646] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:35.708 [2024-12-12 05:49:42.951676] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:35.708 [2024-12-12 05:49:42.951684] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:35.708 [2024-12-12 05:49:42.952021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:35.708 [2024-12-12 05:49:42.952036] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:35.708 [2024-12-12 05:49:42.952091] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:35.708 [2024-12-12 05:49:42.952110] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:35.708 [2024-12-12 05:49:42.952229] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:35.708 [2024-12-12 05:49:42.952249] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:35.708 [2024-12-12 05:49:42.952463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:35.708 [2024-12-12 05:49:42.952626] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:35.708 [2024-12-12 05:49:42.952639] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:35.708 [2024-12-12 05:49:42.952806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:35.708 pt4 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:35.708 05:49:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.708 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:35.708 "name": "raid_bdev1", 00:11:35.708 "uuid": "827ea676-bb1d-4e84-a83f-16e75d523cc2", 00:11:35.708 "strip_size_kb": 64, 00:11:35.708 "state": "online", 00:11:35.708 "raid_level": "concat", 00:11:35.708 
"superblock": true, 00:11:35.708 "num_base_bdevs": 4, 00:11:35.708 "num_base_bdevs_discovered": 4, 00:11:35.708 "num_base_bdevs_operational": 4, 00:11:35.708 "base_bdevs_list": [ 00:11:35.708 { 00:11:35.708 "name": "pt1", 00:11:35.708 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:35.708 "is_configured": true, 00:11:35.708 "data_offset": 2048, 00:11:35.708 "data_size": 63488 00:11:35.708 }, 00:11:35.708 { 00:11:35.708 "name": "pt2", 00:11:35.708 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.708 "is_configured": true, 00:11:35.708 "data_offset": 2048, 00:11:35.708 "data_size": 63488 00:11:35.708 }, 00:11:35.708 { 00:11:35.708 "name": "pt3", 00:11:35.708 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:35.708 "is_configured": true, 00:11:35.708 "data_offset": 2048, 00:11:35.708 "data_size": 63488 00:11:35.708 }, 00:11:35.708 { 00:11:35.708 "name": "pt4", 00:11:35.708 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:35.708 "is_configured": true, 00:11:35.708 "data_offset": 2048, 00:11:35.708 "data_size": 63488 00:11:35.708 } 00:11:35.708 ] 00:11:35.708 }' 00:11:35.708 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:35.708 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.969 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:35.969 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:35.969 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:35.969 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:35.969 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:35.969 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:35.969 05:49:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:35.969 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.969 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.969 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:35.969 [2024-12-12 05:49:43.427142] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:35.969 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.969 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:35.969 "name": "raid_bdev1", 00:11:35.969 "aliases": [ 00:11:35.969 "827ea676-bb1d-4e84-a83f-16e75d523cc2" 00:11:35.969 ], 00:11:35.969 "product_name": "Raid Volume", 00:11:35.969 "block_size": 512, 00:11:35.969 "num_blocks": 253952, 00:11:35.969 "uuid": "827ea676-bb1d-4e84-a83f-16e75d523cc2", 00:11:35.969 "assigned_rate_limits": { 00:11:35.969 "rw_ios_per_sec": 0, 00:11:35.969 "rw_mbytes_per_sec": 0, 00:11:35.969 "r_mbytes_per_sec": 0, 00:11:35.969 "w_mbytes_per_sec": 0 00:11:35.969 }, 00:11:35.969 "claimed": false, 00:11:35.969 "zoned": false, 00:11:35.969 "supported_io_types": { 00:11:35.969 "read": true, 00:11:35.969 "write": true, 00:11:35.969 "unmap": true, 00:11:35.969 "flush": true, 00:11:35.969 "reset": true, 00:11:35.969 "nvme_admin": false, 00:11:35.969 "nvme_io": false, 00:11:35.969 "nvme_io_md": false, 00:11:35.969 "write_zeroes": true, 00:11:35.969 "zcopy": false, 00:11:35.969 "get_zone_info": false, 00:11:35.969 "zone_management": false, 00:11:35.969 "zone_append": false, 00:11:35.969 "compare": false, 00:11:35.969 "compare_and_write": false, 00:11:35.969 "abort": false, 00:11:35.969 "seek_hole": false, 00:11:35.969 "seek_data": false, 00:11:35.969 "copy": false, 00:11:35.969 "nvme_iov_md": false 00:11:35.969 }, 00:11:35.969 
"memory_domains": [ 00:11:35.969 { 00:11:35.969 "dma_device_id": "system", 00:11:35.969 "dma_device_type": 1 00:11:35.969 }, 00:11:35.969 { 00:11:35.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.969 "dma_device_type": 2 00:11:35.969 }, 00:11:35.969 { 00:11:35.969 "dma_device_id": "system", 00:11:35.969 "dma_device_type": 1 00:11:35.969 }, 00:11:35.969 { 00:11:35.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.969 "dma_device_type": 2 00:11:35.969 }, 00:11:35.969 { 00:11:35.969 "dma_device_id": "system", 00:11:35.969 "dma_device_type": 1 00:11:35.969 }, 00:11:35.969 { 00:11:35.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.969 "dma_device_type": 2 00:11:35.969 }, 00:11:35.969 { 00:11:35.969 "dma_device_id": "system", 00:11:35.969 "dma_device_type": 1 00:11:35.969 }, 00:11:35.969 { 00:11:35.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.969 "dma_device_type": 2 00:11:35.969 } 00:11:35.969 ], 00:11:35.969 "driver_specific": { 00:11:35.969 "raid": { 00:11:35.969 "uuid": "827ea676-bb1d-4e84-a83f-16e75d523cc2", 00:11:35.969 "strip_size_kb": 64, 00:11:35.969 "state": "online", 00:11:35.969 "raid_level": "concat", 00:11:35.969 "superblock": true, 00:11:35.969 "num_base_bdevs": 4, 00:11:35.969 "num_base_bdevs_discovered": 4, 00:11:35.969 "num_base_bdevs_operational": 4, 00:11:35.969 "base_bdevs_list": [ 00:11:35.969 { 00:11:35.969 "name": "pt1", 00:11:35.969 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:35.969 "is_configured": true, 00:11:35.969 "data_offset": 2048, 00:11:35.969 "data_size": 63488 00:11:35.969 }, 00:11:35.969 { 00:11:35.969 "name": "pt2", 00:11:35.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:35.969 "is_configured": true, 00:11:35.969 "data_offset": 2048, 00:11:35.969 "data_size": 63488 00:11:35.969 }, 00:11:35.969 { 00:11:35.969 "name": "pt3", 00:11:35.969 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:35.969 "is_configured": true, 00:11:35.969 "data_offset": 2048, 00:11:35.969 "data_size": 63488 
00:11:35.969 }, 00:11:35.969 { 00:11:35.969 "name": "pt4", 00:11:35.969 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:35.969 "is_configured": true, 00:11:35.969 "data_offset": 2048, 00:11:35.969 "data_size": 63488 00:11:35.969 } 00:11:35.969 ] 00:11:35.969 } 00:11:35.969 } 00:11:35.969 }' 00:11:35.969 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:36.229 pt2 00:11:36.229 pt3 00:11:36.229 pt4' 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:36.229 
05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.229 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.489 [2024-12-12 05:49:43.774525] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 827ea676-bb1d-4e84-a83f-16e75d523cc2 '!=' 827ea676-bb1d-4e84-a83f-16e75d523cc2 ']' 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 73520 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 73520 ']' 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 73520 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73520 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73520' 00:11:36.489 killing process with pid 73520 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 73520 00:11:36.489 [2024-12-12 05:49:43.832005] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:36.489 [2024-12-12 05:49:43.832153] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:36.489 05:49:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 73520 00:11:36.489 [2024-12-12 05:49:43.832307] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:36.489 [2024-12-12 05:49:43.832354] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:37.057 [2024-12-12 05:49:44.272637] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:38.438 05:49:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:38.438 00:11:38.438 real 0m5.676s 00:11:38.438 user 0m8.016s 00:11:38.438 sys 0m0.944s 00:11:38.438 05:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.438 05:49:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.438 ************************************ 00:11:38.438 END TEST raid_superblock_test 
00:11:38.438 ************************************ 00:11:38.438 05:49:45 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:11:38.438 05:49:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:38.438 05:49:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.438 05:49:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:38.438 ************************************ 00:11:38.438 START TEST raid_read_error_test 00:11:38.438 ************************************ 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
(( i++ )) 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.5q3UMms6ZQ 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73785 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z 
-f -L bdev_raid 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73785 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 73785 ']' 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.438 05:49:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.438 [2024-12-12 05:49:45.719192] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:11:38.438 [2024-12-12 05:49:45.719393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73785 ] 00:11:38.438 [2024-12-12 05:49:45.897222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:38.698 [2024-12-12 05:49:46.035766] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.958 [2024-12-12 05:49:46.276671] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:38.958 [2024-12-12 05:49:46.276707] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.217 BaseBdev1_malloc 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.217 true 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.217 [2024-12-12 05:49:46.590175] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:39.217 [2024-12-12 05:49:46.590291] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.217 [2024-12-12 05:49:46.590341] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:39.217 [2024-12-12 05:49:46.590393] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.217 [2024-12-12 05:49:46.593069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.217 [2024-12-12 05:49:46.593146] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:39.217 BaseBdev1 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.217 BaseBdev2_malloc 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.217 true 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.217 [2024-12-12 05:49:46.665162] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:39.217 [2024-12-12 05:49:46.665265] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.217 [2024-12-12 05:49:46.665287] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:39.217 [2024-12-12 05:49:46.665300] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.217 [2024-12-12 05:49:46.667806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.217 [2024-12-12 05:49:46.667898] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:39.217 BaseBdev2 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.217 BaseBdev3_malloc 00:11:39.217 05:49:46 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.217 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.478 true 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.478 [2024-12-12 05:49:46.749791] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:39.478 [2024-12-12 05:49:46.749889] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.478 [2024-12-12 05:49:46.749910] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:39.478 [2024-12-12 05:49:46.749921] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.478 [2024-12-12 05:49:46.752379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.478 [2024-12-12 05:49:46.752460] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:39.478 BaseBdev3 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.478 BaseBdev4_malloc 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.478 true 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.478 [2024-12-12 05:49:46.823763] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:39.478 [2024-12-12 05:49:46.823864] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.478 [2024-12-12 05:49:46.823886] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:39.478 [2024-12-12 05:49:46.823898] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.478 [2024-12-12 05:49:46.826319] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.478 [2024-12-12 05:49:46.826409] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:39.478 BaseBdev4 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.478 [2024-12-12 05:49:46.835817] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:39.478 [2024-12-12 05:49:46.837916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:39.478 [2024-12-12 05:49:46.838032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.478 [2024-12-12 05:49:46.838116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:39.478 [2024-12-12 05:49:46.838389] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:39.478 [2024-12-12 05:49:46.838440] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:39.478 [2024-12-12 05:49:46.838737] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:39.478 [2024-12-12 05:49:46.838965] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:39.478 [2024-12-12 05:49:46.839007] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:39.478 [2024-12-12 05:49:46.839249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:39.478 05:49:46 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.478 "name": "raid_bdev1", 00:11:39.478 "uuid": "237814ca-b6b0-4c49-a756-f3155e257c8f", 00:11:39.478 "strip_size_kb": 64, 00:11:39.478 "state": "online", 00:11:39.478 "raid_level": "concat", 00:11:39.478 "superblock": true, 00:11:39.478 "num_base_bdevs": 4, 00:11:39.478 "num_base_bdevs_discovered": 4, 00:11:39.478 "num_base_bdevs_operational": 4, 00:11:39.478 "base_bdevs_list": [ 
00:11:39.478 { 00:11:39.478 "name": "BaseBdev1", 00:11:39.478 "uuid": "35045751-1051-5466-aa95-1fff9f2a34a9", 00:11:39.478 "is_configured": true, 00:11:39.478 "data_offset": 2048, 00:11:39.478 "data_size": 63488 00:11:39.478 }, 00:11:39.478 { 00:11:39.478 "name": "BaseBdev2", 00:11:39.478 "uuid": "9ae2b080-364e-558e-bde8-fc1d7257c3a1", 00:11:39.478 "is_configured": true, 00:11:39.478 "data_offset": 2048, 00:11:39.478 "data_size": 63488 00:11:39.478 }, 00:11:39.478 { 00:11:39.478 "name": "BaseBdev3", 00:11:39.478 "uuid": "69e938ef-0a99-5df0-8cce-b8aed06e4159", 00:11:39.478 "is_configured": true, 00:11:39.478 "data_offset": 2048, 00:11:39.478 "data_size": 63488 00:11:39.478 }, 00:11:39.478 { 00:11:39.478 "name": "BaseBdev4", 00:11:39.478 "uuid": "1288c69c-aa55-5b79-a357-a633332bc2ce", 00:11:39.478 "is_configured": true, 00:11:39.478 "data_offset": 2048, 00:11:39.478 "data_size": 63488 00:11:39.478 } 00:11:39.478 ] 00:11:39.478 }' 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.478 05:49:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.087 05:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:40.087 05:49:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:40.087 [2024-12-12 05:49:47.356437] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:41.025 05:49:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:41.025 05:49:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.025 05:49:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.025 05:49:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.025 05:49:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:41.025 05:49:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:41.025 05:49:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:41.025 05:49:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:41.025 05:49:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.025 05:49:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:41.025 05:49:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.025 05:49:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.025 05:49:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.025 05:49:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.025 05:49:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.025 05:49:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.025 05:49:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.025 05:49:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.025 05:49:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.026 05:49:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.026 05:49:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.026 05:49:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.026 05:49:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.026 "name": "raid_bdev1", 00:11:41.026 "uuid": "237814ca-b6b0-4c49-a756-f3155e257c8f", 00:11:41.026 "strip_size_kb": 64, 00:11:41.026 "state": "online", 00:11:41.026 "raid_level": "concat", 00:11:41.026 "superblock": true, 00:11:41.026 "num_base_bdevs": 4, 00:11:41.026 "num_base_bdevs_discovered": 4, 00:11:41.026 "num_base_bdevs_operational": 4, 00:11:41.026 "base_bdevs_list": [ 00:11:41.026 { 00:11:41.026 "name": "BaseBdev1", 00:11:41.026 "uuid": "35045751-1051-5466-aa95-1fff9f2a34a9", 00:11:41.026 "is_configured": true, 00:11:41.026 "data_offset": 2048, 00:11:41.026 "data_size": 63488 00:11:41.026 }, 00:11:41.026 { 00:11:41.026 "name": "BaseBdev2", 00:11:41.026 "uuid": "9ae2b080-364e-558e-bde8-fc1d7257c3a1", 00:11:41.026 "is_configured": true, 00:11:41.026 "data_offset": 2048, 00:11:41.026 "data_size": 63488 00:11:41.026 }, 00:11:41.026 { 00:11:41.026 "name": "BaseBdev3", 00:11:41.026 "uuid": "69e938ef-0a99-5df0-8cce-b8aed06e4159", 00:11:41.026 "is_configured": true, 00:11:41.026 "data_offset": 2048, 00:11:41.026 "data_size": 63488 00:11:41.026 }, 00:11:41.026 { 00:11:41.026 "name": "BaseBdev4", 00:11:41.026 "uuid": "1288c69c-aa55-5b79-a357-a633332bc2ce", 00:11:41.026 "is_configured": true, 00:11:41.026 "data_offset": 2048, 00:11:41.026 "data_size": 63488 00:11:41.026 } 00:11:41.026 ] 00:11:41.026 }' 00:11:41.026 05:49:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.026 05:49:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.285 05:49:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:41.285 05:49:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.285 05:49:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.285 [2024-12-12 05:49:48.705387] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:41.285 [2024-12-12 05:49:48.705527] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.285 [2024-12-12 05:49:48.708243] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.285 [2024-12-12 05:49:48.708352] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:41.285 [2024-12-12 05:49:48.708419] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.285 [2024-12-12 05:49:48.708497] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:41.285 { 00:11:41.285 "results": [ 00:11:41.285 { 00:11:41.285 "job": "raid_bdev1", 00:11:41.285 "core_mask": "0x1", 00:11:41.285 "workload": "randrw", 00:11:41.285 "percentage": 50, 00:11:41.285 "status": "finished", 00:11:41.285 "queue_depth": 1, 00:11:41.285 "io_size": 131072, 00:11:41.285 "runtime": 1.349558, 00:11:41.285 "iops": 13147.267475721681, 00:11:41.285 "mibps": 1643.4084344652101, 00:11:41.285 "io_failed": 1, 00:11:41.285 "io_timeout": 0, 00:11:41.285 "avg_latency_us": 106.93354963951157, 00:11:41.285 "min_latency_us": 25.3764192139738, 00:11:41.285 "max_latency_us": 1416.6078602620087 00:11:41.285 } 00:11:41.285 ], 00:11:41.285 "core_count": 1 00:11:41.285 } 00:11:41.285 05:49:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.285 05:49:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73785 00:11:41.285 05:49:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 73785 ']' 00:11:41.285 05:49:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 73785 00:11:41.285 05:49:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:41.285 05:49:48 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.285 05:49:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73785 00:11:41.285 killing process with pid 73785 00:11:41.285 05:49:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.285 05:49:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.285 05:49:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73785' 00:11:41.285 05:49:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 73785 00:11:41.285 [2024-12-12 05:49:48.751325] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:41.285 05:49:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 73785 00:11:41.854 [2024-12-12 05:49:49.106478] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.236 05:49:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.5q3UMms6ZQ 00:11:43.236 05:49:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:43.236 05:49:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:43.236 05:49:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:11:43.236 05:49:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:43.236 05:49:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:43.236 05:49:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:43.236 ************************************ 00:11:43.236 END TEST raid_read_error_test 00:11:43.236 ************************************ 00:11:43.236 05:49:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:11:43.236 00:11:43.236 real 0m4.815s 
00:11:43.236 user 0m5.447s 00:11:43.236 sys 0m0.692s 00:11:43.236 05:49:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.236 05:49:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.236 05:49:50 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:11:43.236 05:49:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:43.236 05:49:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.236 05:49:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:43.236 ************************************ 00:11:43.236 START TEST raid_write_error_test 00:11:43.236 ************************************ 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.l5BzariV3T 00:11:43.236 05:49:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73939 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73939 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73939 ']' 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:43.236 05:49:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.236 [2024-12-12 05:49:50.598406] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:11:43.236 [2024-12-12 05:49:50.598614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73939 ] 00:11:43.236 [2024-12-12 05:49:50.753897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.496 [2024-12-12 05:49:50.893557] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.756 [2024-12-12 05:49:51.130349] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:43.756 [2024-12-12 05:49:51.130537] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.015 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:44.015 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:44.015 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:44.015 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:44.015 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.015 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.015 BaseBdev1_malloc 00:11:44.015 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.015 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:44.015 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.015 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.015 true 00:11:44.015 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:11:44.015 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:44.015 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.015 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.015 [2024-12-12 05:49:51.493614] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:44.015 [2024-12-12 05:49:51.493758] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.015 [2024-12-12 05:49:51.493784] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:44.015 [2024-12-12 05:49:51.493796] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.015 [2024-12-12 05:49:51.496321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.015 [2024-12-12 05:49:51.496367] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:44.015 BaseBdev1 00:11:44.015 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.015 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:44.015 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:44.015 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.015 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.274 BaseBdev2_malloc 00:11:44.274 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.274 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:44.274 05:49:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.274 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.274 true 00:11:44.274 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.274 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:44.274 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.274 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.274 [2024-12-12 05:49:51.568216] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:44.274 [2024-12-12 05:49:51.568358] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.274 [2024-12-12 05:49:51.568382] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:44.274 [2024-12-12 05:49:51.568395] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.274 [2024-12-12 05:49:51.570890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.274 [2024-12-12 05:49:51.570931] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:44.274 BaseBdev2 00:11:44.274 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.274 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:44.274 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:44.274 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.274 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:44.274 BaseBdev3_malloc 00:11:44.274 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.274 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:44.274 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.274 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.274 true 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.275 [2024-12-12 05:49:51.657563] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:44.275 [2024-12-12 05:49:51.657691] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.275 [2024-12-12 05:49:51.657714] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:44.275 [2024-12-12 05:49:51.657726] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.275 [2024-12-12 05:49:51.660265] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.275 [2024-12-12 05:49:51.660339] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:44.275 BaseBdev3 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.275 BaseBdev4_malloc 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.275 true 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.275 [2024-12-12 05:49:51.733774] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:44.275 [2024-12-12 05:49:51.733941] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:44.275 [2024-12-12 05:49:51.733983] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:44.275 [2024-12-12 05:49:51.734022] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:44.275 [2024-12-12 05:49:51.736781] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:44.275 [2024-12-12 05:49:51.736862] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:44.275 BaseBdev4 
00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.275 [2024-12-12 05:49:51.745830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:44.275 [2024-12-12 05:49:51.748194] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.275 [2024-12-12 05:49:51.748315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:44.275 [2024-12-12 05:49:51.748403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:44.275 [2024-12-12 05:49:51.748700] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:44.275 [2024-12-12 05:49:51.748755] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:44.275 [2024-12-12 05:49:51.749070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:11:44.275 [2024-12-12 05:49:51.749282] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:44.275 [2024-12-12 05:49:51.749325] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:44.275 [2024-12-12 05:49:51.749581] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.275 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.534 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.534 "name": "raid_bdev1", 00:11:44.534 "uuid": "6b7e7c2d-9142-412a-b947-340dcfb0be89", 00:11:44.534 "strip_size_kb": 64, 00:11:44.534 "state": "online", 00:11:44.534 "raid_level": "concat", 00:11:44.534 "superblock": true, 00:11:44.534 "num_base_bdevs": 4, 00:11:44.534 "num_base_bdevs_discovered": 4, 00:11:44.534 
"num_base_bdevs_operational": 4, 00:11:44.534 "base_bdevs_list": [ 00:11:44.534 { 00:11:44.534 "name": "BaseBdev1", 00:11:44.534 "uuid": "542f5eae-1667-592d-9d8d-1d5c7f4e95ab", 00:11:44.534 "is_configured": true, 00:11:44.534 "data_offset": 2048, 00:11:44.534 "data_size": 63488 00:11:44.534 }, 00:11:44.534 { 00:11:44.534 "name": "BaseBdev2", 00:11:44.534 "uuid": "c355e693-2dcb-5ffa-8058-d5227dd7a3be", 00:11:44.534 "is_configured": true, 00:11:44.534 "data_offset": 2048, 00:11:44.534 "data_size": 63488 00:11:44.534 }, 00:11:44.534 { 00:11:44.534 "name": "BaseBdev3", 00:11:44.534 "uuid": "5c018583-ad7a-5caa-9b93-4e5b35734b08", 00:11:44.534 "is_configured": true, 00:11:44.534 "data_offset": 2048, 00:11:44.534 "data_size": 63488 00:11:44.534 }, 00:11:44.534 { 00:11:44.534 "name": "BaseBdev4", 00:11:44.534 "uuid": "75caa160-3356-513f-8151-6bb3ad5ec742", 00:11:44.534 "is_configured": true, 00:11:44.534 "data_offset": 2048, 00:11:44.534 "data_size": 63488 00:11:44.534 } 00:11:44.534 ] 00:11:44.534 }' 00:11:44.534 05:49:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.534 05:49:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.793 05:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:44.793 05:49:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:44.793 [2024-12-12 05:49:52.258308] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:11:45.727 05:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:45.727 05:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.727 05:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.727 05:49:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.727 05:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:45.727 05:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:45.727 05:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:45.727 05:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:11:45.727 05:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:45.727 05:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:45.727 05:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.727 05:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:45.727 05:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:45.727 05:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.727 05:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.727 05:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.728 05:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.728 05:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.728 05:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.728 05:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:45.728 05:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.728 05:49:53 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:45.728 05:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.728 "name": "raid_bdev1", 00:11:45.728 "uuid": "6b7e7c2d-9142-412a-b947-340dcfb0be89", 00:11:45.728 "strip_size_kb": 64, 00:11:45.728 "state": "online", 00:11:45.728 "raid_level": "concat", 00:11:45.728 "superblock": true, 00:11:45.728 "num_base_bdevs": 4, 00:11:45.728 "num_base_bdevs_discovered": 4, 00:11:45.728 "num_base_bdevs_operational": 4, 00:11:45.728 "base_bdevs_list": [ 00:11:45.728 { 00:11:45.728 "name": "BaseBdev1", 00:11:45.728 "uuid": "542f5eae-1667-592d-9d8d-1d5c7f4e95ab", 00:11:45.728 "is_configured": true, 00:11:45.728 "data_offset": 2048, 00:11:45.728 "data_size": 63488 00:11:45.728 }, 00:11:45.728 { 00:11:45.728 "name": "BaseBdev2", 00:11:45.728 "uuid": "c355e693-2dcb-5ffa-8058-d5227dd7a3be", 00:11:45.728 "is_configured": true, 00:11:45.728 "data_offset": 2048, 00:11:45.728 "data_size": 63488 00:11:45.728 }, 00:11:45.728 { 00:11:45.728 "name": "BaseBdev3", 00:11:45.728 "uuid": "5c018583-ad7a-5caa-9b93-4e5b35734b08", 00:11:45.728 "is_configured": true, 00:11:45.728 "data_offset": 2048, 00:11:45.728 "data_size": 63488 00:11:45.728 }, 00:11:45.728 { 00:11:45.728 "name": "BaseBdev4", 00:11:45.728 "uuid": "75caa160-3356-513f-8151-6bb3ad5ec742", 00:11:45.728 "is_configured": true, 00:11:45.728 "data_offset": 2048, 00:11:45.728 "data_size": 63488 00:11:45.728 } 00:11:45.728 ] 00:11:45.728 }' 00:11:45.728 05:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.728 05:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.295 05:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:46.295 05:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.295 05:49:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:46.295 [2024-12-12 05:49:53.619551] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.295 [2024-12-12 05:49:53.619654] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.295 [2024-12-12 05:49:53.622393] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.295 [2024-12-12 05:49:53.622573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.295 [2024-12-12 05:49:53.622683] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.295 [2024-12-12 05:49:53.622736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:46.295 { 00:11:46.295 "results": [ 00:11:46.295 { 00:11:46.295 "job": "raid_bdev1", 00:11:46.295 "core_mask": "0x1", 00:11:46.295 "workload": "randrw", 00:11:46.295 "percentage": 50, 00:11:46.295 "status": "finished", 00:11:46.295 "queue_depth": 1, 00:11:46.295 "io_size": 131072, 00:11:46.295 "runtime": 1.361801, 00:11:46.295 "iops": 12953.434459219812, 00:11:46.295 "mibps": 1619.1793074024765, 00:11:46.295 "io_failed": 1, 00:11:46.295 "io_timeout": 0, 00:11:46.295 "avg_latency_us": 108.47548562561065, 00:11:46.295 "min_latency_us": 26.606113537117903, 00:11:46.295 "max_latency_us": 1395.1441048034935 00:11:46.295 } 00:11:46.295 ], 00:11:46.295 "core_count": 1 00:11:46.295 } 00:11:46.295 05:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.295 05:49:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73939 00:11:46.295 05:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73939 ']' 00:11:46.295 05:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73939 00:11:46.295 05:49:53 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:11:46.295 05:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:46.295 05:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73939 00:11:46.295 killing process with pid 73939 00:11:46.295 05:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:46.295 05:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:46.295 05:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73939' 00:11:46.295 05:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73939 00:11:46.295 [2024-12-12 05:49:53.651497] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:46.295 05:49:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73939 00:11:46.554 [2024-12-12 05:49:54.012668] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:47.940 05:49:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.l5BzariV3T 00:11:47.940 05:49:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:47.940 05:49:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:47.940 05:49:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:47.940 05:49:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:47.940 05:49:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:47.940 05:49:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:47.940 ************************************ 00:11:47.940 END TEST raid_write_error_test 00:11:47.940 ************************************ 00:11:47.940 05:49:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:47.940 00:11:47.940 real 0m4.841s 00:11:47.940 user 0m5.522s 00:11:47.940 sys 0m0.685s 00:11:47.940 05:49:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.940 05:49:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.940 05:49:55 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:47.940 05:49:55 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:11:47.940 05:49:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:47.940 05:49:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.940 05:49:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:47.940 ************************************ 00:11:47.940 START TEST raid_state_function_test 00:11:47.940 ************************************ 00:11:47.940 05:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:11:47.940 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:47.940 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:47.940 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:47.940 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:47.940 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:47.940 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:47.940 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:47.940 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:11:47.940 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:47.940 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:47.940 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:47.940 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:47.940 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:47.940 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:47.940 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:47.940 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:47.940 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:47.940 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:47.940 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:47.940 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:47.941 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:47.941 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:47.941 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:47.941 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:47.941 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:47.941 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:47.941 05:49:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:47.941 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:47.941 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=74077 00:11:47.941 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:47.941 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74077' 00:11:47.941 Process raid pid: 74077 00:11:47.941 05:49:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 74077 00:11:47.941 05:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 74077 ']' 00:11:47.941 05:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.941 05:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.941 05:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.941 05:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.941 05:49:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.199 [2024-12-12 05:49:55.487589] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:11:48.199 [2024-12-12 05:49:55.487694] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.199 [2024-12-12 05:49:55.663714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.458 [2024-12-12 05:49:55.801506] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.717 [2024-12-12 05:49:56.040694] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.717 [2024-12-12 05:49:56.040750] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.975 [2024-12-12 05:49:56.299613] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:48.975 [2024-12-12 05:49:56.299765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:48.975 [2024-12-12 05:49:56.299817] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:48.975 [2024-12-12 05:49:56.299876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:48.975 [2024-12-12 05:49:56.299936] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:48.975 [2024-12-12 05:49:56.299973] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:48.975 [2024-12-12 05:49:56.300004] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:48.975 [2024-12-12 05:49:56.300043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.975 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.975 "name": "Existed_Raid", 00:11:48.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.975 "strip_size_kb": 0, 00:11:48.975 "state": "configuring", 00:11:48.975 "raid_level": "raid1", 00:11:48.975 "superblock": false, 00:11:48.975 "num_base_bdevs": 4, 00:11:48.975 "num_base_bdevs_discovered": 0, 00:11:48.975 "num_base_bdevs_operational": 4, 00:11:48.975 "base_bdevs_list": [ 00:11:48.975 { 00:11:48.975 "name": "BaseBdev1", 00:11:48.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.976 "is_configured": false, 00:11:48.976 "data_offset": 0, 00:11:48.976 "data_size": 0 00:11:48.976 }, 00:11:48.976 { 00:11:48.976 "name": "BaseBdev2", 00:11:48.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.976 "is_configured": false, 00:11:48.976 "data_offset": 0, 00:11:48.976 "data_size": 0 00:11:48.976 }, 00:11:48.976 { 00:11:48.976 "name": "BaseBdev3", 00:11:48.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.976 "is_configured": false, 00:11:48.976 "data_offset": 0, 00:11:48.976 "data_size": 0 00:11:48.976 }, 00:11:48.976 { 00:11:48.976 "name": "BaseBdev4", 00:11:48.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.976 "is_configured": false, 00:11:48.976 "data_offset": 0, 00:11:48.976 "data_size": 0 00:11:48.976 } 00:11:48.976 ] 00:11:48.976 }' 00:11:48.976 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.976 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.543 [2024-12-12 05:49:56.782722] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:49.543 [2024-12-12 05:49:56.782840] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.543 [2024-12-12 05:49:56.794675] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:49.543 [2024-12-12 05:49:56.794763] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:49.543 [2024-12-12 05:49:56.794790] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:49.543 [2024-12-12 05:49:56.794813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:49.543 [2024-12-12 05:49:56.794830] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:49.543 [2024-12-12 05:49:56.794850] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:49.543 [2024-12-12 05:49:56.794868] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:49.543 [2024-12-12 05:49:56.794888] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.543 [2024-12-12 05:49:56.849610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:49.543 BaseBdev1 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.543 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.543 [ 00:11:49.543 { 00:11:49.543 "name": "BaseBdev1", 00:11:49.543 "aliases": [ 00:11:49.543 "763073be-9f27-4d91-8afc-178f968d9c64" 00:11:49.543 ], 00:11:49.543 "product_name": "Malloc disk", 00:11:49.543 "block_size": 512, 00:11:49.543 "num_blocks": 65536, 00:11:49.543 "uuid": "763073be-9f27-4d91-8afc-178f968d9c64", 00:11:49.543 "assigned_rate_limits": { 00:11:49.543 "rw_ios_per_sec": 0, 00:11:49.543 "rw_mbytes_per_sec": 0, 00:11:49.543 "r_mbytes_per_sec": 0, 00:11:49.543 "w_mbytes_per_sec": 0 00:11:49.543 }, 00:11:49.543 "claimed": true, 00:11:49.543 "claim_type": "exclusive_write", 00:11:49.543 "zoned": false, 00:11:49.543 "supported_io_types": { 00:11:49.543 "read": true, 00:11:49.543 "write": true, 00:11:49.543 "unmap": true, 00:11:49.543 "flush": true, 00:11:49.543 "reset": true, 00:11:49.543 "nvme_admin": false, 00:11:49.543 "nvme_io": false, 00:11:49.543 "nvme_io_md": false, 00:11:49.543 "write_zeroes": true, 00:11:49.543 "zcopy": true, 00:11:49.543 "get_zone_info": false, 00:11:49.543 "zone_management": false, 00:11:49.543 "zone_append": false, 00:11:49.543 "compare": false, 00:11:49.543 "compare_and_write": false, 00:11:49.544 "abort": true, 00:11:49.544 "seek_hole": false, 00:11:49.544 "seek_data": false, 00:11:49.544 "copy": true, 00:11:49.544 "nvme_iov_md": false 00:11:49.544 }, 00:11:49.544 "memory_domains": [ 00:11:49.544 { 00:11:49.544 "dma_device_id": "system", 00:11:49.544 "dma_device_type": 1 00:11:49.544 }, 00:11:49.544 { 00:11:49.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:49.544 "dma_device_type": 2 00:11:49.544 } 00:11:49.544 ], 00:11:49.544 "driver_specific": {} 00:11:49.544 } 00:11:49.544 ] 00:11:49.544 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:49.544 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:49.544 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:49.544 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.544 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.544 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:49.544 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:49.544 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.544 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.544 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.544 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.544 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.544 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.544 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.544 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.544 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.544 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.544 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.544 "name": "Existed_Raid", 
00:11:49.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.544 "strip_size_kb": 0, 00:11:49.544 "state": "configuring", 00:11:49.544 "raid_level": "raid1", 00:11:49.544 "superblock": false, 00:11:49.544 "num_base_bdevs": 4, 00:11:49.544 "num_base_bdevs_discovered": 1, 00:11:49.544 "num_base_bdevs_operational": 4, 00:11:49.544 "base_bdevs_list": [ 00:11:49.544 { 00:11:49.544 "name": "BaseBdev1", 00:11:49.544 "uuid": "763073be-9f27-4d91-8afc-178f968d9c64", 00:11:49.544 "is_configured": true, 00:11:49.544 "data_offset": 0, 00:11:49.544 "data_size": 65536 00:11:49.544 }, 00:11:49.544 { 00:11:49.544 "name": "BaseBdev2", 00:11:49.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.544 "is_configured": false, 00:11:49.544 "data_offset": 0, 00:11:49.544 "data_size": 0 00:11:49.544 }, 00:11:49.544 { 00:11:49.544 "name": "BaseBdev3", 00:11:49.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.544 "is_configured": false, 00:11:49.544 "data_offset": 0, 00:11:49.544 "data_size": 0 00:11:49.544 }, 00:11:49.544 { 00:11:49.544 "name": "BaseBdev4", 00:11:49.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.544 "is_configured": false, 00:11:49.544 "data_offset": 0, 00:11:49.544 "data_size": 0 00:11:49.544 } 00:11:49.544 ] 00:11:49.544 }' 00:11:49.544 05:49:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.544 05:49:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.111 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:50.111 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.111 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.111 [2024-12-12 05:49:57.336968] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:50.111 [2024-12-12 05:49:57.337033] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:50.111 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.111 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:50.111 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.111 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.111 [2024-12-12 05:49:57.348975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:50.111 [2024-12-12 05:49:57.351175] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:50.111 [2024-12-12 05:49:57.351261] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:50.111 [2024-12-12 05:49:57.351291] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:50.111 [2024-12-12 05:49:57.351315] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:50.111 [2024-12-12 05:49:57.351333] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:50.111 [2024-12-12 05:49:57.351354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:50.111 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.111 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:50.111 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:50.111 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:50.111 
05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.111 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.111 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.111 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:50.111 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.111 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.111 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.111 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.111 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.112 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.112 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.112 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.112 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.112 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.112 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.112 "name": "Existed_Raid", 00:11:50.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.112 "strip_size_kb": 0, 00:11:50.112 "state": "configuring", 00:11:50.112 "raid_level": "raid1", 00:11:50.112 "superblock": false, 00:11:50.112 "num_base_bdevs": 4, 00:11:50.112 "num_base_bdevs_discovered": 1, 
00:11:50.112 "num_base_bdevs_operational": 4, 00:11:50.112 "base_bdevs_list": [ 00:11:50.112 { 00:11:50.112 "name": "BaseBdev1", 00:11:50.112 "uuid": "763073be-9f27-4d91-8afc-178f968d9c64", 00:11:50.112 "is_configured": true, 00:11:50.112 "data_offset": 0, 00:11:50.112 "data_size": 65536 00:11:50.112 }, 00:11:50.112 { 00:11:50.112 "name": "BaseBdev2", 00:11:50.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.112 "is_configured": false, 00:11:50.112 "data_offset": 0, 00:11:50.112 "data_size": 0 00:11:50.112 }, 00:11:50.112 { 00:11:50.112 "name": "BaseBdev3", 00:11:50.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.112 "is_configured": false, 00:11:50.112 "data_offset": 0, 00:11:50.112 "data_size": 0 00:11:50.112 }, 00:11:50.112 { 00:11:50.112 "name": "BaseBdev4", 00:11:50.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.112 "is_configured": false, 00:11:50.112 "data_offset": 0, 00:11:50.112 "data_size": 0 00:11:50.112 } 00:11:50.112 ] 00:11:50.112 }' 00:11:50.112 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.112 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.371 [2024-12-12 05:49:57.841442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:50.371 BaseBdev2 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.371 [ 00:11:50.371 { 00:11:50.371 "name": "BaseBdev2", 00:11:50.371 "aliases": [ 00:11:50.371 "9d84a3c6-49be-453a-974c-325552d31188" 00:11:50.371 ], 00:11:50.371 "product_name": "Malloc disk", 00:11:50.371 "block_size": 512, 00:11:50.371 "num_blocks": 65536, 00:11:50.371 "uuid": "9d84a3c6-49be-453a-974c-325552d31188", 00:11:50.371 "assigned_rate_limits": { 00:11:50.371 "rw_ios_per_sec": 0, 00:11:50.371 "rw_mbytes_per_sec": 0, 00:11:50.371 "r_mbytes_per_sec": 0, 00:11:50.371 "w_mbytes_per_sec": 0 00:11:50.371 }, 00:11:50.371 "claimed": true, 00:11:50.371 "claim_type": "exclusive_write", 00:11:50.371 "zoned": false, 00:11:50.371 "supported_io_types": { 00:11:50.371 "read": true, 
00:11:50.371 "write": true, 00:11:50.371 "unmap": true, 00:11:50.371 "flush": true, 00:11:50.371 "reset": true, 00:11:50.371 "nvme_admin": false, 00:11:50.371 "nvme_io": false, 00:11:50.371 "nvme_io_md": false, 00:11:50.371 "write_zeroes": true, 00:11:50.371 "zcopy": true, 00:11:50.371 "get_zone_info": false, 00:11:50.371 "zone_management": false, 00:11:50.371 "zone_append": false, 00:11:50.371 "compare": false, 00:11:50.371 "compare_and_write": false, 00:11:50.371 "abort": true, 00:11:50.371 "seek_hole": false, 00:11:50.371 "seek_data": false, 00:11:50.371 "copy": true, 00:11:50.371 "nvme_iov_md": false 00:11:50.371 }, 00:11:50.371 "memory_domains": [ 00:11:50.371 { 00:11:50.371 "dma_device_id": "system", 00:11:50.371 "dma_device_type": 1 00:11:50.371 }, 00:11:50.371 { 00:11:50.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.371 "dma_device_type": 2 00:11:50.371 } 00:11:50.371 ], 00:11:50.371 "driver_specific": {} 00:11:50.371 } 00:11:50.371 ] 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.371 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.630 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.630 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.630 "name": "Existed_Raid", 00:11:50.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.630 "strip_size_kb": 0, 00:11:50.630 "state": "configuring", 00:11:50.630 "raid_level": "raid1", 00:11:50.630 "superblock": false, 00:11:50.630 "num_base_bdevs": 4, 00:11:50.630 "num_base_bdevs_discovered": 2, 00:11:50.630 "num_base_bdevs_operational": 4, 00:11:50.630 "base_bdevs_list": [ 00:11:50.630 { 00:11:50.630 "name": "BaseBdev1", 00:11:50.630 "uuid": "763073be-9f27-4d91-8afc-178f968d9c64", 00:11:50.630 "is_configured": true, 00:11:50.630 "data_offset": 0, 00:11:50.630 "data_size": 65536 00:11:50.630 }, 00:11:50.630 { 00:11:50.630 "name": "BaseBdev2", 00:11:50.630 "uuid": "9d84a3c6-49be-453a-974c-325552d31188", 00:11:50.630 "is_configured": true, 
00:11:50.630 "data_offset": 0, 00:11:50.630 "data_size": 65536 00:11:50.630 }, 00:11:50.630 { 00:11:50.630 "name": "BaseBdev3", 00:11:50.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.630 "is_configured": false, 00:11:50.630 "data_offset": 0, 00:11:50.630 "data_size": 0 00:11:50.630 }, 00:11:50.630 { 00:11:50.630 "name": "BaseBdev4", 00:11:50.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.630 "is_configured": false, 00:11:50.630 "data_offset": 0, 00:11:50.630 "data_size": 0 00:11:50.630 } 00:11:50.630 ] 00:11:50.630 }' 00:11:50.630 05:49:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.630 05:49:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.889 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:50.889 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.889 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.889 [2024-12-12 05:49:58.389632] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:50.889 BaseBdev3 00:11:50.889 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.889 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:50.889 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:50.889 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:50.889 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:50.889 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:50.889 05:49:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:50.889 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:50.889 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.889 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.889 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.889 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:50.889 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.889 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.148 [ 00:11:51.148 { 00:11:51.148 "name": "BaseBdev3", 00:11:51.148 "aliases": [ 00:11:51.148 "fcef5baf-98a0-44fb-b1f7-b1dcad16d876" 00:11:51.148 ], 00:11:51.148 "product_name": "Malloc disk", 00:11:51.148 "block_size": 512, 00:11:51.148 "num_blocks": 65536, 00:11:51.148 "uuid": "fcef5baf-98a0-44fb-b1f7-b1dcad16d876", 00:11:51.148 "assigned_rate_limits": { 00:11:51.148 "rw_ios_per_sec": 0, 00:11:51.148 "rw_mbytes_per_sec": 0, 00:11:51.148 "r_mbytes_per_sec": 0, 00:11:51.148 "w_mbytes_per_sec": 0 00:11:51.148 }, 00:11:51.148 "claimed": true, 00:11:51.148 "claim_type": "exclusive_write", 00:11:51.148 "zoned": false, 00:11:51.148 "supported_io_types": { 00:11:51.148 "read": true, 00:11:51.148 "write": true, 00:11:51.148 "unmap": true, 00:11:51.148 "flush": true, 00:11:51.148 "reset": true, 00:11:51.148 "nvme_admin": false, 00:11:51.148 "nvme_io": false, 00:11:51.148 "nvme_io_md": false, 00:11:51.148 "write_zeroes": true, 00:11:51.148 "zcopy": true, 00:11:51.148 "get_zone_info": false, 00:11:51.148 "zone_management": false, 00:11:51.148 "zone_append": false, 00:11:51.148 "compare": false, 00:11:51.148 "compare_and_write": false, 
00:11:51.148 "abort": true, 00:11:51.148 "seek_hole": false, 00:11:51.148 "seek_data": false, 00:11:51.148 "copy": true, 00:11:51.148 "nvme_iov_md": false 00:11:51.148 }, 00:11:51.148 "memory_domains": [ 00:11:51.148 { 00:11:51.148 "dma_device_id": "system", 00:11:51.148 "dma_device_type": 1 00:11:51.148 }, 00:11:51.148 { 00:11:51.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.148 "dma_device_type": 2 00:11:51.148 } 00:11:51.148 ], 00:11:51.148 "driver_specific": {} 00:11:51.148 } 00:11:51.148 ] 00:11:51.148 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.148 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:51.148 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:51.148 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:51.148 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:51.148 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.148 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:51.148 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.148 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.148 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.148 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.148 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.148 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:51.149 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.149 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.149 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.149 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.149 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.149 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.149 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.149 "name": "Existed_Raid", 00:11:51.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.149 "strip_size_kb": 0, 00:11:51.149 "state": "configuring", 00:11:51.149 "raid_level": "raid1", 00:11:51.149 "superblock": false, 00:11:51.149 "num_base_bdevs": 4, 00:11:51.149 "num_base_bdevs_discovered": 3, 00:11:51.149 "num_base_bdevs_operational": 4, 00:11:51.149 "base_bdevs_list": [ 00:11:51.149 { 00:11:51.149 "name": "BaseBdev1", 00:11:51.149 "uuid": "763073be-9f27-4d91-8afc-178f968d9c64", 00:11:51.149 "is_configured": true, 00:11:51.149 "data_offset": 0, 00:11:51.149 "data_size": 65536 00:11:51.149 }, 00:11:51.149 { 00:11:51.149 "name": "BaseBdev2", 00:11:51.149 "uuid": "9d84a3c6-49be-453a-974c-325552d31188", 00:11:51.149 "is_configured": true, 00:11:51.149 "data_offset": 0, 00:11:51.149 "data_size": 65536 00:11:51.149 }, 00:11:51.149 { 00:11:51.149 "name": "BaseBdev3", 00:11:51.149 "uuid": "fcef5baf-98a0-44fb-b1f7-b1dcad16d876", 00:11:51.149 "is_configured": true, 00:11:51.149 "data_offset": 0, 00:11:51.149 "data_size": 65536 00:11:51.149 }, 00:11:51.149 { 00:11:51.149 "name": "BaseBdev4", 00:11:51.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.149 "is_configured": false, 
00:11:51.149 "data_offset": 0, 00:11:51.149 "data_size": 0 00:11:51.149 } 00:11:51.149 ] 00:11:51.149 }' 00:11:51.149 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.149 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.407 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:51.407 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.407 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.666 [2024-12-12 05:49:58.935554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:51.666 [2024-12-12 05:49:58.935712] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:51.666 [2024-12-12 05:49:58.935726] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:51.666 [2024-12-12 05:49:58.936089] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:51.666 [2024-12-12 05:49:58.936293] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:51.666 [2024-12-12 05:49:58.936308] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:51.666 [2024-12-12 05:49:58.936598] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.666 BaseBdev4 00:11:51.666 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.666 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:51.666 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:51.666 05:49:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:51.666 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:51.666 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:51.666 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:51.666 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:51.666 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.666 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.666 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.667 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:51.667 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.667 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.667 [ 00:11:51.667 { 00:11:51.667 "name": "BaseBdev4", 00:11:51.667 "aliases": [ 00:11:51.667 "64f72ac5-d8cd-4c85-8f53-03f679549ae2" 00:11:51.667 ], 00:11:51.667 "product_name": "Malloc disk", 00:11:51.667 "block_size": 512, 00:11:51.667 "num_blocks": 65536, 00:11:51.667 "uuid": "64f72ac5-d8cd-4c85-8f53-03f679549ae2", 00:11:51.667 "assigned_rate_limits": { 00:11:51.667 "rw_ios_per_sec": 0, 00:11:51.667 "rw_mbytes_per_sec": 0, 00:11:51.667 "r_mbytes_per_sec": 0, 00:11:51.667 "w_mbytes_per_sec": 0 00:11:51.667 }, 00:11:51.667 "claimed": true, 00:11:51.667 "claim_type": "exclusive_write", 00:11:51.667 "zoned": false, 00:11:51.667 "supported_io_types": { 00:11:51.667 "read": true, 00:11:51.667 "write": true, 00:11:51.667 "unmap": true, 00:11:51.667 "flush": true, 00:11:51.667 "reset": true, 00:11:51.667 
"nvme_admin": false, 00:11:51.667 "nvme_io": false, 00:11:51.667 "nvme_io_md": false, 00:11:51.667 "write_zeroes": true, 00:11:51.667 "zcopy": true, 00:11:51.667 "get_zone_info": false, 00:11:51.667 "zone_management": false, 00:11:51.667 "zone_append": false, 00:11:51.667 "compare": false, 00:11:51.667 "compare_and_write": false, 00:11:51.667 "abort": true, 00:11:51.667 "seek_hole": false, 00:11:51.667 "seek_data": false, 00:11:51.667 "copy": true, 00:11:51.667 "nvme_iov_md": false 00:11:51.667 }, 00:11:51.667 "memory_domains": [ 00:11:51.667 { 00:11:51.667 "dma_device_id": "system", 00:11:51.667 "dma_device_type": 1 00:11:51.667 }, 00:11:51.667 { 00:11:51.667 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.667 "dma_device_type": 2 00:11:51.667 } 00:11:51.667 ], 00:11:51.667 "driver_specific": {} 00:11:51.667 } 00:11:51.667 ] 00:11:51.667 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.667 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:51.667 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:51.667 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:51.667 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:51.667 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:51.667 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.667 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.667 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.667 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:51.667 05:49:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.667 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.667 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.667 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.667 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.667 05:49:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.667 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.667 05:49:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.667 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.667 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.667 "name": "Existed_Raid", 00:11:51.667 "uuid": "da9ca158-2bd3-4522-b5be-e2f166017c57", 00:11:51.667 "strip_size_kb": 0, 00:11:51.667 "state": "online", 00:11:51.667 "raid_level": "raid1", 00:11:51.667 "superblock": false, 00:11:51.667 "num_base_bdevs": 4, 00:11:51.667 "num_base_bdevs_discovered": 4, 00:11:51.667 "num_base_bdevs_operational": 4, 00:11:51.667 "base_bdevs_list": [ 00:11:51.667 { 00:11:51.667 "name": "BaseBdev1", 00:11:51.667 "uuid": "763073be-9f27-4d91-8afc-178f968d9c64", 00:11:51.667 "is_configured": true, 00:11:51.667 "data_offset": 0, 00:11:51.667 "data_size": 65536 00:11:51.667 }, 00:11:51.667 { 00:11:51.667 "name": "BaseBdev2", 00:11:51.667 "uuid": "9d84a3c6-49be-453a-974c-325552d31188", 00:11:51.667 "is_configured": true, 00:11:51.667 "data_offset": 0, 00:11:51.667 "data_size": 65536 00:11:51.667 }, 00:11:51.667 { 00:11:51.667 "name": "BaseBdev3", 00:11:51.667 "uuid": 
"fcef5baf-98a0-44fb-b1f7-b1dcad16d876", 00:11:51.667 "is_configured": true, 00:11:51.667 "data_offset": 0, 00:11:51.667 "data_size": 65536 00:11:51.667 }, 00:11:51.667 { 00:11:51.667 "name": "BaseBdev4", 00:11:51.667 "uuid": "64f72ac5-d8cd-4c85-8f53-03f679549ae2", 00:11:51.667 "is_configured": true, 00:11:51.667 "data_offset": 0, 00:11:51.667 "data_size": 65536 00:11:51.667 } 00:11:51.667 ] 00:11:51.667 }' 00:11:51.667 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.667 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.926 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:51.926 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:51.926 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:51.926 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:51.926 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:51.926 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:51.926 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:51.926 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:51.926 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.926 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.926 [2024-12-12 05:49:59.407224] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:51.926 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.926 05:49:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:51.926 "name": "Existed_Raid", 00:11:51.926 "aliases": [ 00:11:51.926 "da9ca158-2bd3-4522-b5be-e2f166017c57" 00:11:51.926 ], 00:11:51.926 "product_name": "Raid Volume", 00:11:51.926 "block_size": 512, 00:11:51.926 "num_blocks": 65536, 00:11:51.926 "uuid": "da9ca158-2bd3-4522-b5be-e2f166017c57", 00:11:51.926 "assigned_rate_limits": { 00:11:51.926 "rw_ios_per_sec": 0, 00:11:51.926 "rw_mbytes_per_sec": 0, 00:11:51.926 "r_mbytes_per_sec": 0, 00:11:51.926 "w_mbytes_per_sec": 0 00:11:51.926 }, 00:11:51.926 "claimed": false, 00:11:51.926 "zoned": false, 00:11:51.926 "supported_io_types": { 00:11:51.926 "read": true, 00:11:51.926 "write": true, 00:11:51.926 "unmap": false, 00:11:51.926 "flush": false, 00:11:51.926 "reset": true, 00:11:51.926 "nvme_admin": false, 00:11:51.926 "nvme_io": false, 00:11:51.926 "nvme_io_md": false, 00:11:51.926 "write_zeroes": true, 00:11:51.926 "zcopy": false, 00:11:51.926 "get_zone_info": false, 00:11:51.926 "zone_management": false, 00:11:51.926 "zone_append": false, 00:11:51.926 "compare": false, 00:11:51.926 "compare_and_write": false, 00:11:51.926 "abort": false, 00:11:51.926 "seek_hole": false, 00:11:51.926 "seek_data": false, 00:11:51.926 "copy": false, 00:11:51.926 "nvme_iov_md": false 00:11:51.926 }, 00:11:51.926 "memory_domains": [ 00:11:51.926 { 00:11:51.926 "dma_device_id": "system", 00:11:51.926 "dma_device_type": 1 00:11:51.926 }, 00:11:51.926 { 00:11:51.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.926 "dma_device_type": 2 00:11:51.926 }, 00:11:51.926 { 00:11:51.926 "dma_device_id": "system", 00:11:51.926 "dma_device_type": 1 00:11:51.926 }, 00:11:51.926 { 00:11:51.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.926 "dma_device_type": 2 00:11:51.926 }, 00:11:51.926 { 00:11:51.926 "dma_device_id": "system", 00:11:51.926 "dma_device_type": 1 00:11:51.926 }, 00:11:51.926 { 00:11:51.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:51.926 "dma_device_type": 2 00:11:51.926 }, 00:11:51.926 { 00:11:51.926 "dma_device_id": "system", 00:11:51.926 "dma_device_type": 1 00:11:51.926 }, 00:11:51.926 { 00:11:51.926 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:51.926 "dma_device_type": 2 00:11:51.926 } 00:11:51.926 ], 00:11:51.926 "driver_specific": { 00:11:51.926 "raid": { 00:11:51.926 "uuid": "da9ca158-2bd3-4522-b5be-e2f166017c57", 00:11:51.926 "strip_size_kb": 0, 00:11:51.926 "state": "online", 00:11:51.926 "raid_level": "raid1", 00:11:51.926 "superblock": false, 00:11:51.926 "num_base_bdevs": 4, 00:11:51.926 "num_base_bdevs_discovered": 4, 00:11:51.926 "num_base_bdevs_operational": 4, 00:11:51.926 "base_bdevs_list": [ 00:11:51.926 { 00:11:51.926 "name": "BaseBdev1", 00:11:51.926 "uuid": "763073be-9f27-4d91-8afc-178f968d9c64", 00:11:51.926 "is_configured": true, 00:11:51.926 "data_offset": 0, 00:11:51.926 "data_size": 65536 00:11:51.926 }, 00:11:51.926 { 00:11:51.926 "name": "BaseBdev2", 00:11:51.926 "uuid": "9d84a3c6-49be-453a-974c-325552d31188", 00:11:51.926 "is_configured": true, 00:11:51.926 "data_offset": 0, 00:11:51.926 "data_size": 65536 00:11:51.926 }, 00:11:51.926 { 00:11:51.927 "name": "BaseBdev3", 00:11:51.927 "uuid": "fcef5baf-98a0-44fb-b1f7-b1dcad16d876", 00:11:51.927 "is_configured": true, 00:11:51.927 "data_offset": 0, 00:11:51.927 "data_size": 65536 00:11:51.927 }, 00:11:51.927 { 00:11:51.927 "name": "BaseBdev4", 00:11:51.927 "uuid": "64f72ac5-d8cd-4c85-8f53-03f679549ae2", 00:11:51.927 "is_configured": true, 00:11:51.927 "data_offset": 0, 00:11:51.927 "data_size": 65536 00:11:51.927 } 00:11:51.927 ] 00:11:51.927 } 00:11:51.927 } 00:11:51.927 }' 00:11:51.927 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:52.186 BaseBdev2 00:11:52.186 BaseBdev3 
00:11:52.186 BaseBdev4' 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.186 05:49:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:52.186 05:49:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.186 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.186 [2024-12-12 05:49:59.682483] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:52.445 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.445 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:52.445 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:52.445 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:52.445 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:52.445 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:52.445 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:52.445 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:52.445 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:52.445 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:52.445 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:52.445 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:52.445 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:52.445 
05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:52.445 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:52.446 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:52.446 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.446 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.446 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:52.446 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.446 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.446 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:52.446 "name": "Existed_Raid", 00:11:52.446 "uuid": "da9ca158-2bd3-4522-b5be-e2f166017c57", 00:11:52.446 "strip_size_kb": 0, 00:11:52.446 "state": "online", 00:11:52.446 "raid_level": "raid1", 00:11:52.446 "superblock": false, 00:11:52.446 "num_base_bdevs": 4, 00:11:52.446 "num_base_bdevs_discovered": 3, 00:11:52.446 "num_base_bdevs_operational": 3, 00:11:52.446 "base_bdevs_list": [ 00:11:52.446 { 00:11:52.446 "name": null, 00:11:52.446 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.446 "is_configured": false, 00:11:52.446 "data_offset": 0, 00:11:52.446 "data_size": 65536 00:11:52.446 }, 00:11:52.446 { 00:11:52.446 "name": "BaseBdev2", 00:11:52.446 "uuid": "9d84a3c6-49be-453a-974c-325552d31188", 00:11:52.446 "is_configured": true, 00:11:52.446 "data_offset": 0, 00:11:52.446 "data_size": 65536 00:11:52.446 }, 00:11:52.446 { 00:11:52.446 "name": "BaseBdev3", 00:11:52.446 "uuid": "fcef5baf-98a0-44fb-b1f7-b1dcad16d876", 00:11:52.446 "is_configured": true, 00:11:52.446 "data_offset": 0, 
00:11:52.446 "data_size": 65536 00:11:52.446 }, 00:11:52.446 { 00:11:52.446 "name": "BaseBdev4", 00:11:52.446 "uuid": "64f72ac5-d8cd-4c85-8f53-03f679549ae2", 00:11:52.446 "is_configured": true, 00:11:52.446 "data_offset": 0, 00:11:52.446 "data_size": 65536 00:11:52.446 } 00:11:52.446 ] 00:11:52.446 }' 00:11:52.446 05:49:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:52.446 05:49:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.705 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:52.705 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:52.705 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.705 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:52.705 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.705 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.963 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.963 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:52.963 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:52.964 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:52.964 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.964 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.964 [2024-12-12 05:50:00.276317] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:52.964 05:50:00 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.964 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:52.964 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:52.964 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:52.964 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.964 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:52.964 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.964 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.964 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:52.964 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:52.964 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:52.964 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.964 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.964 [2024-12-12 05:50:00.423513] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.223 [2024-12-12 05:50:00.563586] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:53.223 [2024-12-12 05:50:00.563682] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.223 [2024-12-12 05:50:00.660088] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.223 [2024-12-12 05:50:00.660188] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.223 [2024-12-12 05:50:00.660207] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.223 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.483 BaseBdev2 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 
-- # [[ -z '' ]] 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.483 [ 00:11:53.483 { 00:11:53.483 "name": "BaseBdev2", 00:11:53.483 "aliases": [ 00:11:53.483 "5bdbd0fe-caff-42b1-8510-1deff72e7662" 00:11:53.483 ], 00:11:53.483 "product_name": "Malloc disk", 00:11:53.483 "block_size": 512, 00:11:53.483 "num_blocks": 65536, 00:11:53.483 "uuid": "5bdbd0fe-caff-42b1-8510-1deff72e7662", 00:11:53.483 "assigned_rate_limits": { 00:11:53.483 "rw_ios_per_sec": 0, 00:11:53.483 "rw_mbytes_per_sec": 0, 00:11:53.483 "r_mbytes_per_sec": 0, 00:11:53.483 "w_mbytes_per_sec": 0 00:11:53.483 }, 00:11:53.483 "claimed": false, 00:11:53.483 "zoned": false, 00:11:53.483 "supported_io_types": { 00:11:53.483 "read": true, 00:11:53.483 "write": true, 00:11:53.483 "unmap": true, 00:11:53.483 "flush": true, 00:11:53.483 "reset": true, 00:11:53.483 "nvme_admin": false, 00:11:53.483 "nvme_io": false, 00:11:53.483 "nvme_io_md": false, 00:11:53.483 "write_zeroes": true, 00:11:53.483 "zcopy": true, 00:11:53.483 "get_zone_info": false, 00:11:53.483 "zone_management": false, 00:11:53.483 "zone_append": false, 00:11:53.483 "compare": false, 
00:11:53.483 "compare_and_write": false, 00:11:53.483 "abort": true, 00:11:53.483 "seek_hole": false, 00:11:53.483 "seek_data": false, 00:11:53.483 "copy": true, 00:11:53.483 "nvme_iov_md": false 00:11:53.483 }, 00:11:53.483 "memory_domains": [ 00:11:53.483 { 00:11:53.483 "dma_device_id": "system", 00:11:53.483 "dma_device_type": 1 00:11:53.483 }, 00:11:53.483 { 00:11:53.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.483 "dma_device_type": 2 00:11:53.483 } 00:11:53.483 ], 00:11:53.483 "driver_specific": {} 00:11:53.483 } 00:11:53.483 ] 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.483 BaseBdev3 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' 
]] 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.483 [ 00:11:53.483 { 00:11:53.483 "name": "BaseBdev3", 00:11:53.483 "aliases": [ 00:11:53.483 "654645b3-e379-45e3-a5ff-7aad92f69ee8" 00:11:53.483 ], 00:11:53.483 "product_name": "Malloc disk", 00:11:53.483 "block_size": 512, 00:11:53.483 "num_blocks": 65536, 00:11:53.483 "uuid": "654645b3-e379-45e3-a5ff-7aad92f69ee8", 00:11:53.483 "assigned_rate_limits": { 00:11:53.483 "rw_ios_per_sec": 0, 00:11:53.483 "rw_mbytes_per_sec": 0, 00:11:53.483 "r_mbytes_per_sec": 0, 00:11:53.483 "w_mbytes_per_sec": 0 00:11:53.483 }, 00:11:53.483 "claimed": false, 00:11:53.483 "zoned": false, 00:11:53.483 "supported_io_types": { 00:11:53.483 "read": true, 00:11:53.483 "write": true, 00:11:53.483 "unmap": true, 00:11:53.483 "flush": true, 00:11:53.483 "reset": true, 00:11:53.483 "nvme_admin": false, 00:11:53.483 "nvme_io": false, 00:11:53.483 "nvme_io_md": false, 00:11:53.483 "write_zeroes": true, 00:11:53.483 "zcopy": true, 00:11:53.483 "get_zone_info": false, 00:11:53.483 "zone_management": false, 00:11:53.483 "zone_append": false, 00:11:53.483 "compare": false, 00:11:53.483 
"compare_and_write": false, 00:11:53.483 "abort": true, 00:11:53.483 "seek_hole": false, 00:11:53.483 "seek_data": false, 00:11:53.483 "copy": true, 00:11:53.483 "nvme_iov_md": false 00:11:53.483 }, 00:11:53.483 "memory_domains": [ 00:11:53.483 { 00:11:53.483 "dma_device_id": "system", 00:11:53.483 "dma_device_type": 1 00:11:53.483 }, 00:11:53.483 { 00:11:53.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.483 "dma_device_type": 2 00:11:53.483 } 00:11:53.483 ], 00:11:53.483 "driver_specific": {} 00:11:53.483 } 00:11:53.483 ] 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.483 BaseBdev4 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 
00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:53.483 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.484 [ 00:11:53.484 { 00:11:53.484 "name": "BaseBdev4", 00:11:53.484 "aliases": [ 00:11:53.484 "249f94ac-4cbf-4bc7-bf43-06d25d19ba1d" 00:11:53.484 ], 00:11:53.484 "product_name": "Malloc disk", 00:11:53.484 "block_size": 512, 00:11:53.484 "num_blocks": 65536, 00:11:53.484 "uuid": "249f94ac-4cbf-4bc7-bf43-06d25d19ba1d", 00:11:53.484 "assigned_rate_limits": { 00:11:53.484 "rw_ios_per_sec": 0, 00:11:53.484 "rw_mbytes_per_sec": 0, 00:11:53.484 "r_mbytes_per_sec": 0, 00:11:53.484 "w_mbytes_per_sec": 0 00:11:53.484 }, 00:11:53.484 "claimed": false, 00:11:53.484 "zoned": false, 00:11:53.484 "supported_io_types": { 00:11:53.484 "read": true, 00:11:53.484 "write": true, 00:11:53.484 "unmap": true, 00:11:53.484 "flush": true, 00:11:53.484 "reset": true, 00:11:53.484 "nvme_admin": false, 00:11:53.484 "nvme_io": false, 00:11:53.484 "nvme_io_md": false, 00:11:53.484 "write_zeroes": true, 00:11:53.484 "zcopy": true, 00:11:53.484 "get_zone_info": false, 00:11:53.484 "zone_management": false, 00:11:53.484 "zone_append": false, 00:11:53.484 "compare": false, 00:11:53.484 
"compare_and_write": false, 00:11:53.484 "abort": true, 00:11:53.484 "seek_hole": false, 00:11:53.484 "seek_data": false, 00:11:53.484 "copy": true, 00:11:53.484 "nvme_iov_md": false 00:11:53.484 }, 00:11:53.484 "memory_domains": [ 00:11:53.484 { 00:11:53.484 "dma_device_id": "system", 00:11:53.484 "dma_device_type": 1 00:11:53.484 }, 00:11:53.484 { 00:11:53.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.484 "dma_device_type": 2 00:11:53.484 } 00:11:53.484 ], 00:11:53.484 "driver_specific": {} 00:11:53.484 } 00:11:53.484 ] 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.484 [2024-12-12 05:50:00.954233] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:53.484 [2024-12-12 05:50:00.954355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:53.484 [2024-12-12 05:50:00.954406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:53.484 [2024-12-12 05:50:00.956247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:53.484 [2024-12-12 05:50:00.956362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 
00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.484 05:50:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.742 05:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.742 "name": "Existed_Raid", 00:11:53.742 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:53.742 "strip_size_kb": 0, 00:11:53.742 "state": "configuring", 00:11:53.742 "raid_level": "raid1", 00:11:53.742 "superblock": false, 00:11:53.742 "num_base_bdevs": 4, 00:11:53.742 "num_base_bdevs_discovered": 3, 00:11:53.742 "num_base_bdevs_operational": 4, 00:11:53.742 "base_bdevs_list": [ 00:11:53.742 { 00:11:53.742 "name": "BaseBdev1", 00:11:53.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.742 "is_configured": false, 00:11:53.742 "data_offset": 0, 00:11:53.742 "data_size": 0 00:11:53.742 }, 00:11:53.742 { 00:11:53.742 "name": "BaseBdev2", 00:11:53.742 "uuid": "5bdbd0fe-caff-42b1-8510-1deff72e7662", 00:11:53.742 "is_configured": true, 00:11:53.742 "data_offset": 0, 00:11:53.742 "data_size": 65536 00:11:53.742 }, 00:11:53.742 { 00:11:53.742 "name": "BaseBdev3", 00:11:53.742 "uuid": "654645b3-e379-45e3-a5ff-7aad92f69ee8", 00:11:53.742 "is_configured": true, 00:11:53.742 "data_offset": 0, 00:11:53.742 "data_size": 65536 00:11:53.742 }, 00:11:53.742 { 00:11:53.742 "name": "BaseBdev4", 00:11:53.742 "uuid": "249f94ac-4cbf-4bc7-bf43-06d25d19ba1d", 00:11:53.742 "is_configured": true, 00:11:53.742 "data_offset": 0, 00:11:53.742 "data_size": 65536 00:11:53.742 } 00:11:53.742 ] 00:11:53.742 }' 00:11:53.742 05:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.742 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.000 [2024-12-12 05:50:01.445419] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.000 "name": "Existed_Raid", 00:11:54.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.000 
"strip_size_kb": 0, 00:11:54.000 "state": "configuring", 00:11:54.000 "raid_level": "raid1", 00:11:54.000 "superblock": false, 00:11:54.000 "num_base_bdevs": 4, 00:11:54.000 "num_base_bdevs_discovered": 2, 00:11:54.000 "num_base_bdevs_operational": 4, 00:11:54.000 "base_bdevs_list": [ 00:11:54.000 { 00:11:54.000 "name": "BaseBdev1", 00:11:54.000 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.000 "is_configured": false, 00:11:54.000 "data_offset": 0, 00:11:54.000 "data_size": 0 00:11:54.000 }, 00:11:54.000 { 00:11:54.000 "name": null, 00:11:54.000 "uuid": "5bdbd0fe-caff-42b1-8510-1deff72e7662", 00:11:54.000 "is_configured": false, 00:11:54.000 "data_offset": 0, 00:11:54.000 "data_size": 65536 00:11:54.000 }, 00:11:54.000 { 00:11:54.000 "name": "BaseBdev3", 00:11:54.000 "uuid": "654645b3-e379-45e3-a5ff-7aad92f69ee8", 00:11:54.000 "is_configured": true, 00:11:54.000 "data_offset": 0, 00:11:54.000 "data_size": 65536 00:11:54.000 }, 00:11:54.000 { 00:11:54.000 "name": "BaseBdev4", 00:11:54.000 "uuid": "249f94ac-4cbf-4bc7-bf43-06d25d19ba1d", 00:11:54.000 "is_configured": true, 00:11:54.000 "data_offset": 0, 00:11:54.000 "data_size": 65536 00:11:54.000 } 00:11:54.000 ] 00:11:54.000 }' 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.000 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.568 05:50:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.568 [2024-12-12 05:50:01.970228] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:54.568 BaseBdev1 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.568 05:50:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.568 [ 00:11:54.568 { 00:11:54.568 "name": "BaseBdev1", 00:11:54.568 "aliases": [ 00:11:54.568 "0fdec77c-47f4-48ea-9bf4-a2a392565707" 00:11:54.568 ], 00:11:54.568 "product_name": "Malloc disk", 00:11:54.568 "block_size": 512, 00:11:54.568 "num_blocks": 65536, 00:11:54.568 "uuid": "0fdec77c-47f4-48ea-9bf4-a2a392565707", 00:11:54.568 "assigned_rate_limits": { 00:11:54.568 "rw_ios_per_sec": 0, 00:11:54.568 "rw_mbytes_per_sec": 0, 00:11:54.568 "r_mbytes_per_sec": 0, 00:11:54.568 "w_mbytes_per_sec": 0 00:11:54.568 }, 00:11:54.568 "claimed": true, 00:11:54.568 "claim_type": "exclusive_write", 00:11:54.568 "zoned": false, 00:11:54.568 "supported_io_types": { 00:11:54.568 "read": true, 00:11:54.568 "write": true, 00:11:54.568 "unmap": true, 00:11:54.568 "flush": true, 00:11:54.568 "reset": true, 00:11:54.568 "nvme_admin": false, 00:11:54.568 "nvme_io": false, 00:11:54.568 "nvme_io_md": false, 00:11:54.568 "write_zeroes": true, 00:11:54.568 "zcopy": true, 00:11:54.568 "get_zone_info": false, 00:11:54.568 "zone_management": false, 00:11:54.568 "zone_append": false, 00:11:54.568 "compare": false, 00:11:54.568 "compare_and_write": false, 00:11:54.568 "abort": true, 00:11:54.568 "seek_hole": false, 00:11:54.568 "seek_data": false, 00:11:54.568 "copy": true, 00:11:54.568 "nvme_iov_md": false 00:11:54.568 }, 00:11:54.568 "memory_domains": [ 00:11:54.568 { 00:11:54.568 "dma_device_id": "system", 00:11:54.568 "dma_device_type": 1 00:11:54.568 }, 00:11:54.568 { 00:11:54.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.568 "dma_device_type": 2 00:11:54.568 } 00:11:54.568 ], 00:11:54.568 "driver_specific": {} 00:11:54.568 } 00:11:54.568 ] 00:11:54.568 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.568 05:50:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:11:54.568 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:54.568 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.568 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.568 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:54.568 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:54.568 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.568 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.568 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.568 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.568 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.568 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.568 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.568 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:54.568 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:54.568 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:54.568 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.568 "name": "Existed_Raid", 00:11:54.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.568 
"strip_size_kb": 0, 00:11:54.568 "state": "configuring", 00:11:54.568 "raid_level": "raid1", 00:11:54.568 "superblock": false, 00:11:54.568 "num_base_bdevs": 4, 00:11:54.568 "num_base_bdevs_discovered": 3, 00:11:54.568 "num_base_bdevs_operational": 4, 00:11:54.568 "base_bdevs_list": [ 00:11:54.568 { 00:11:54.568 "name": "BaseBdev1", 00:11:54.568 "uuid": "0fdec77c-47f4-48ea-9bf4-a2a392565707", 00:11:54.568 "is_configured": true, 00:11:54.568 "data_offset": 0, 00:11:54.568 "data_size": 65536 00:11:54.568 }, 00:11:54.568 { 00:11:54.568 "name": null, 00:11:54.568 "uuid": "5bdbd0fe-caff-42b1-8510-1deff72e7662", 00:11:54.568 "is_configured": false, 00:11:54.568 "data_offset": 0, 00:11:54.568 "data_size": 65536 00:11:54.568 }, 00:11:54.568 { 00:11:54.568 "name": "BaseBdev3", 00:11:54.568 "uuid": "654645b3-e379-45e3-a5ff-7aad92f69ee8", 00:11:54.568 "is_configured": true, 00:11:54.568 "data_offset": 0, 00:11:54.568 "data_size": 65536 00:11:54.568 }, 00:11:54.568 { 00:11:54.568 "name": "BaseBdev4", 00:11:54.568 "uuid": "249f94ac-4cbf-4bc7-bf43-06d25d19ba1d", 00:11:54.568 "is_configured": true, 00:11:54.568 "data_offset": 0, 00:11:54.568 "data_size": 65536 00:11:54.568 } 00:11:54.568 ] 00:11:54.568 }' 00:11:54.568 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.568 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.140 
05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.140 [2024-12-12 05:50:02.501522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.140 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.140 "name": "Existed_Raid", 00:11:55.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.140 "strip_size_kb": 0, 00:11:55.140 "state": "configuring", 00:11:55.140 "raid_level": "raid1", 00:11:55.140 "superblock": false, 00:11:55.140 "num_base_bdevs": 4, 00:11:55.140 "num_base_bdevs_discovered": 2, 00:11:55.140 "num_base_bdevs_operational": 4, 00:11:55.140 "base_bdevs_list": [ 00:11:55.140 { 00:11:55.140 "name": "BaseBdev1", 00:11:55.140 "uuid": "0fdec77c-47f4-48ea-9bf4-a2a392565707", 00:11:55.140 "is_configured": true, 00:11:55.140 "data_offset": 0, 00:11:55.140 "data_size": 65536 00:11:55.140 }, 00:11:55.140 { 00:11:55.140 "name": null, 00:11:55.140 "uuid": "5bdbd0fe-caff-42b1-8510-1deff72e7662", 00:11:55.140 "is_configured": false, 00:11:55.140 "data_offset": 0, 00:11:55.140 "data_size": 65536 00:11:55.140 }, 00:11:55.140 { 00:11:55.140 "name": null, 00:11:55.140 "uuid": "654645b3-e379-45e3-a5ff-7aad92f69ee8", 00:11:55.140 "is_configured": false, 00:11:55.140 "data_offset": 0, 00:11:55.140 "data_size": 65536 00:11:55.140 }, 00:11:55.140 { 00:11:55.140 "name": "BaseBdev4", 00:11:55.140 "uuid": "249f94ac-4cbf-4bc7-bf43-06d25d19ba1d", 00:11:55.140 "is_configured": true, 00:11:55.141 "data_offset": 0, 00:11:55.141 "data_size": 65536 00:11:55.141 } 00:11:55.141 ] 00:11:55.141 }' 00:11:55.141 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.141 05:50:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:55.405 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.405 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.405 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.405 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:55.664 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.664 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:55.664 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:55.664 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.664 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.664 [2024-12-12 05:50:02.968671] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:55.664 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.664 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:55.664 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.664 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.664 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:55.664 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:55.664 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:55.664 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.664 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.664 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.664 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.664 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.664 05:50:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.664 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.664 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.664 05:50:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.664 05:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.664 "name": "Existed_Raid", 00:11:55.664 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.664 "strip_size_kb": 0, 00:11:55.664 "state": "configuring", 00:11:55.664 "raid_level": "raid1", 00:11:55.664 "superblock": false, 00:11:55.664 "num_base_bdevs": 4, 00:11:55.664 "num_base_bdevs_discovered": 3, 00:11:55.664 "num_base_bdevs_operational": 4, 00:11:55.664 "base_bdevs_list": [ 00:11:55.664 { 00:11:55.664 "name": "BaseBdev1", 00:11:55.664 "uuid": "0fdec77c-47f4-48ea-9bf4-a2a392565707", 00:11:55.664 "is_configured": true, 00:11:55.664 "data_offset": 0, 00:11:55.664 "data_size": 65536 00:11:55.664 }, 00:11:55.664 { 00:11:55.664 "name": null, 00:11:55.664 "uuid": "5bdbd0fe-caff-42b1-8510-1deff72e7662", 00:11:55.664 "is_configured": false, 00:11:55.664 "data_offset": 0, 00:11:55.664 "data_size": 65536 00:11:55.664 }, 00:11:55.664 { 
00:11:55.664 "name": "BaseBdev3", 00:11:55.664 "uuid": "654645b3-e379-45e3-a5ff-7aad92f69ee8", 00:11:55.664 "is_configured": true, 00:11:55.664 "data_offset": 0, 00:11:55.664 "data_size": 65536 00:11:55.664 }, 00:11:55.664 { 00:11:55.664 "name": "BaseBdev4", 00:11:55.664 "uuid": "249f94ac-4cbf-4bc7-bf43-06d25d19ba1d", 00:11:55.664 "is_configured": true, 00:11:55.664 "data_offset": 0, 00:11:55.664 "data_size": 65536 00:11:55.664 } 00:11:55.664 ] 00:11:55.664 }' 00:11:55.664 05:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.664 05:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.923 05:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:55.923 05:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.923 05:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.923 05:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.923 05:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:55.923 05:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:55.923 05:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:55.923 05:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:55.923 05:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.923 [2024-12-12 05:50:03.411977] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:56.181 05:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.181 05:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:56.181 05:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.181 05:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.181 05:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.181 05:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.181 05:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.181 05:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.181 05:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.181 05:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.181 05:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.181 05:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.181 05:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.181 05:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.181 05:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.181 05:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.181 05:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.181 "name": "Existed_Raid", 00:11:56.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.181 "strip_size_kb": 0, 00:11:56.181 "state": "configuring", 00:11:56.181 "raid_level": "raid1", 00:11:56.181 "superblock": false, 00:11:56.181 
"num_base_bdevs": 4, 00:11:56.181 "num_base_bdevs_discovered": 2, 00:11:56.181 "num_base_bdevs_operational": 4, 00:11:56.181 "base_bdevs_list": [ 00:11:56.181 { 00:11:56.181 "name": null, 00:11:56.181 "uuid": "0fdec77c-47f4-48ea-9bf4-a2a392565707", 00:11:56.181 "is_configured": false, 00:11:56.181 "data_offset": 0, 00:11:56.181 "data_size": 65536 00:11:56.181 }, 00:11:56.181 { 00:11:56.181 "name": null, 00:11:56.181 "uuid": "5bdbd0fe-caff-42b1-8510-1deff72e7662", 00:11:56.181 "is_configured": false, 00:11:56.181 "data_offset": 0, 00:11:56.181 "data_size": 65536 00:11:56.181 }, 00:11:56.181 { 00:11:56.181 "name": "BaseBdev3", 00:11:56.181 "uuid": "654645b3-e379-45e3-a5ff-7aad92f69ee8", 00:11:56.181 "is_configured": true, 00:11:56.181 "data_offset": 0, 00:11:56.181 "data_size": 65536 00:11:56.181 }, 00:11:56.181 { 00:11:56.181 "name": "BaseBdev4", 00:11:56.181 "uuid": "249f94ac-4cbf-4bc7-bf43-06d25d19ba1d", 00:11:56.181 "is_configured": true, 00:11:56.181 "data_offset": 0, 00:11:56.181 "data_size": 65536 00:11:56.181 } 00:11:56.181 ] 00:11:56.181 }' 00:11:56.181 05:50:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.181 05:50:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:56.749 05:50:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.749 [2024-12-12 05:50:04.034296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.749 05:50:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.749 "name": "Existed_Raid", 00:11:56.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.749 "strip_size_kb": 0, 00:11:56.749 "state": "configuring", 00:11:56.749 "raid_level": "raid1", 00:11:56.749 "superblock": false, 00:11:56.749 "num_base_bdevs": 4, 00:11:56.749 "num_base_bdevs_discovered": 3, 00:11:56.749 "num_base_bdevs_operational": 4, 00:11:56.749 "base_bdevs_list": [ 00:11:56.749 { 00:11:56.749 "name": null, 00:11:56.749 "uuid": "0fdec77c-47f4-48ea-9bf4-a2a392565707", 00:11:56.749 "is_configured": false, 00:11:56.749 "data_offset": 0, 00:11:56.749 "data_size": 65536 00:11:56.749 }, 00:11:56.749 { 00:11:56.749 "name": "BaseBdev2", 00:11:56.749 "uuid": "5bdbd0fe-caff-42b1-8510-1deff72e7662", 00:11:56.749 "is_configured": true, 00:11:56.749 "data_offset": 0, 00:11:56.749 "data_size": 65536 00:11:56.749 }, 00:11:56.749 { 00:11:56.749 "name": "BaseBdev3", 00:11:56.749 "uuid": "654645b3-e379-45e3-a5ff-7aad92f69ee8", 00:11:56.749 "is_configured": true, 00:11:56.749 "data_offset": 0, 00:11:56.749 "data_size": 65536 00:11:56.749 }, 00:11:56.749 { 00:11:56.749 "name": "BaseBdev4", 00:11:56.749 "uuid": "249f94ac-4cbf-4bc7-bf43-06d25d19ba1d", 00:11:56.749 "is_configured": true, 00:11:56.749 "data_offset": 0, 00:11:56.749 "data_size": 65536 00:11:56.749 } 00:11:56.749 ] 00:11:56.749 }' 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.749 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.008 05:50:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.008 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:57.008 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.008 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.008 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0fdec77c-47f4-48ea-9bf4-a2a392565707 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.267 [2024-12-12 05:50:04.635521] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:57.267 [2024-12-12 05:50:04.635646] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:57.267 [2024-12-12 05:50:04.635677] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:57.267 [2024-12-12 05:50:04.636024] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:57.267 [2024-12-12 05:50:04.636252] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:57.267 [2024-12-12 05:50:04.636295] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:57.267 [2024-12-12 05:50:04.636635] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.267 NewBaseBdev 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.267 05:50:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.267 [ 00:11:57.267 { 00:11:57.267 "name": "NewBaseBdev", 00:11:57.267 "aliases": [ 00:11:57.267 "0fdec77c-47f4-48ea-9bf4-a2a392565707" 00:11:57.267 ], 00:11:57.267 "product_name": "Malloc disk", 00:11:57.267 "block_size": 512, 00:11:57.267 "num_blocks": 65536, 00:11:57.267 "uuid": "0fdec77c-47f4-48ea-9bf4-a2a392565707", 00:11:57.267 "assigned_rate_limits": { 00:11:57.267 "rw_ios_per_sec": 0, 00:11:57.267 "rw_mbytes_per_sec": 0, 00:11:57.267 "r_mbytes_per_sec": 0, 00:11:57.267 "w_mbytes_per_sec": 0 00:11:57.267 }, 00:11:57.267 "claimed": true, 00:11:57.267 "claim_type": "exclusive_write", 00:11:57.267 "zoned": false, 00:11:57.267 "supported_io_types": { 00:11:57.267 "read": true, 00:11:57.267 "write": true, 00:11:57.267 "unmap": true, 00:11:57.267 "flush": true, 00:11:57.267 "reset": true, 00:11:57.267 "nvme_admin": false, 00:11:57.267 "nvme_io": false, 00:11:57.267 "nvme_io_md": false, 00:11:57.267 "write_zeroes": true, 00:11:57.267 "zcopy": true, 00:11:57.267 "get_zone_info": false, 00:11:57.267 "zone_management": false, 00:11:57.267 "zone_append": false, 00:11:57.267 "compare": false, 00:11:57.267 "compare_and_write": false, 00:11:57.267 "abort": true, 00:11:57.267 "seek_hole": false, 00:11:57.267 "seek_data": false, 00:11:57.267 "copy": true, 00:11:57.267 "nvme_iov_md": false 00:11:57.267 }, 00:11:57.267 "memory_domains": [ 00:11:57.267 { 00:11:57.267 "dma_device_id": "system", 00:11:57.267 "dma_device_type": 1 00:11:57.267 }, 00:11:57.267 { 00:11:57.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.267 "dma_device_type": 2 00:11:57.267 } 00:11:57.267 ], 00:11:57.267 "driver_specific": {} 00:11:57.267 } 00:11:57.267 ] 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:57.267 05:50:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.267 "name": "Existed_Raid", 00:11:57.267 "uuid": "3864fe6d-8e17-472a-b768-172cfc868ef6", 00:11:57.267 "strip_size_kb": 0, 00:11:57.267 "state": "online", 00:11:57.267 "raid_level": "raid1", 
00:11:57.267 "superblock": false, 00:11:57.267 "num_base_bdevs": 4, 00:11:57.267 "num_base_bdevs_discovered": 4, 00:11:57.267 "num_base_bdevs_operational": 4, 00:11:57.267 "base_bdevs_list": [ 00:11:57.267 { 00:11:57.267 "name": "NewBaseBdev", 00:11:57.267 "uuid": "0fdec77c-47f4-48ea-9bf4-a2a392565707", 00:11:57.267 "is_configured": true, 00:11:57.267 "data_offset": 0, 00:11:57.267 "data_size": 65536 00:11:57.267 }, 00:11:57.267 { 00:11:57.267 "name": "BaseBdev2", 00:11:57.267 "uuid": "5bdbd0fe-caff-42b1-8510-1deff72e7662", 00:11:57.267 "is_configured": true, 00:11:57.267 "data_offset": 0, 00:11:57.267 "data_size": 65536 00:11:57.267 }, 00:11:57.267 { 00:11:57.267 "name": "BaseBdev3", 00:11:57.267 "uuid": "654645b3-e379-45e3-a5ff-7aad92f69ee8", 00:11:57.267 "is_configured": true, 00:11:57.267 "data_offset": 0, 00:11:57.267 "data_size": 65536 00:11:57.267 }, 00:11:57.267 { 00:11:57.267 "name": "BaseBdev4", 00:11:57.267 "uuid": "249f94ac-4cbf-4bc7-bf43-06d25d19ba1d", 00:11:57.267 "is_configured": true, 00:11:57.267 "data_offset": 0, 00:11:57.267 "data_size": 65536 00:11:57.267 } 00:11:57.267 ] 00:11:57.267 }' 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.267 05:50:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.834 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev 
cmp_base_bdev 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:57.835 [2024-12-12 05:50:05.107137] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:57.835 "name": "Existed_Raid", 00:11:57.835 "aliases": [ 00:11:57.835 "3864fe6d-8e17-472a-b768-172cfc868ef6" 00:11:57.835 ], 00:11:57.835 "product_name": "Raid Volume", 00:11:57.835 "block_size": 512, 00:11:57.835 "num_blocks": 65536, 00:11:57.835 "uuid": "3864fe6d-8e17-472a-b768-172cfc868ef6", 00:11:57.835 "assigned_rate_limits": { 00:11:57.835 "rw_ios_per_sec": 0, 00:11:57.835 "rw_mbytes_per_sec": 0, 00:11:57.835 "r_mbytes_per_sec": 0, 00:11:57.835 "w_mbytes_per_sec": 0 00:11:57.835 }, 00:11:57.835 "claimed": false, 00:11:57.835 "zoned": false, 00:11:57.835 "supported_io_types": { 00:11:57.835 "read": true, 00:11:57.835 "write": true, 00:11:57.835 "unmap": false, 00:11:57.835 "flush": false, 00:11:57.835 "reset": true, 00:11:57.835 "nvme_admin": false, 00:11:57.835 "nvme_io": false, 00:11:57.835 "nvme_io_md": false, 00:11:57.835 "write_zeroes": true, 00:11:57.835 "zcopy": false, 00:11:57.835 "get_zone_info": false, 00:11:57.835 "zone_management": false, 00:11:57.835 "zone_append": false, 00:11:57.835 "compare": false, 00:11:57.835 "compare_and_write": false, 00:11:57.835 "abort": false, 00:11:57.835 "seek_hole": false, 00:11:57.835 "seek_data": false, 00:11:57.835 "copy": false, 00:11:57.835 
"nvme_iov_md": false 00:11:57.835 }, 00:11:57.835 "memory_domains": [ 00:11:57.835 { 00:11:57.835 "dma_device_id": "system", 00:11:57.835 "dma_device_type": 1 00:11:57.835 }, 00:11:57.835 { 00:11:57.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.835 "dma_device_type": 2 00:11:57.835 }, 00:11:57.835 { 00:11:57.835 "dma_device_id": "system", 00:11:57.835 "dma_device_type": 1 00:11:57.835 }, 00:11:57.835 { 00:11:57.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.835 "dma_device_type": 2 00:11:57.835 }, 00:11:57.835 { 00:11:57.835 "dma_device_id": "system", 00:11:57.835 "dma_device_type": 1 00:11:57.835 }, 00:11:57.835 { 00:11:57.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.835 "dma_device_type": 2 00:11:57.835 }, 00:11:57.835 { 00:11:57.835 "dma_device_id": "system", 00:11:57.835 "dma_device_type": 1 00:11:57.835 }, 00:11:57.835 { 00:11:57.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.835 "dma_device_type": 2 00:11:57.835 } 00:11:57.835 ], 00:11:57.835 "driver_specific": { 00:11:57.835 "raid": { 00:11:57.835 "uuid": "3864fe6d-8e17-472a-b768-172cfc868ef6", 00:11:57.835 "strip_size_kb": 0, 00:11:57.835 "state": "online", 00:11:57.835 "raid_level": "raid1", 00:11:57.835 "superblock": false, 00:11:57.835 "num_base_bdevs": 4, 00:11:57.835 "num_base_bdevs_discovered": 4, 00:11:57.835 "num_base_bdevs_operational": 4, 00:11:57.835 "base_bdevs_list": [ 00:11:57.835 { 00:11:57.835 "name": "NewBaseBdev", 00:11:57.835 "uuid": "0fdec77c-47f4-48ea-9bf4-a2a392565707", 00:11:57.835 "is_configured": true, 00:11:57.835 "data_offset": 0, 00:11:57.835 "data_size": 65536 00:11:57.835 }, 00:11:57.835 { 00:11:57.835 "name": "BaseBdev2", 00:11:57.835 "uuid": "5bdbd0fe-caff-42b1-8510-1deff72e7662", 00:11:57.835 "is_configured": true, 00:11:57.835 "data_offset": 0, 00:11:57.835 "data_size": 65536 00:11:57.835 }, 00:11:57.835 { 00:11:57.835 "name": "BaseBdev3", 00:11:57.835 "uuid": "654645b3-e379-45e3-a5ff-7aad92f69ee8", 00:11:57.835 "is_configured": true, 
00:11:57.835 "data_offset": 0, 00:11:57.835 "data_size": 65536 00:11:57.835 }, 00:11:57.835 { 00:11:57.835 "name": "BaseBdev4", 00:11:57.835 "uuid": "249f94ac-4cbf-4bc7-bf43-06d25d19ba1d", 00:11:57.835 "is_configured": true, 00:11:57.835 "data_offset": 0, 00:11:57.835 "data_size": 65536 00:11:57.835 } 00:11:57.835 ] 00:11:57.835 } 00:11:57.835 } 00:11:57.835 }' 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:57.835 BaseBdev2 00:11:57.835 BaseBdev3 00:11:57.835 BaseBdev4' 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.835 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.094 [2024-12-12 05:50:05.438215] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:58.094 [2024-12-12 05:50:05.438337] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:58.094 [2024-12-12 05:50:05.438469] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:58.094 [2024-12-12 05:50:05.438831] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:58.094 [2024-12-12 05:50:05.438851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 74077 
00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 74077 ']' 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 74077 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74077 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:58.094 killing process with pid 74077 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74077' 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 74077 00:11:58.094 [2024-12-12 05:50:05.488636] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:58.094 05:50:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 74077 00:11:58.661 [2024-12-12 05:50:05.922276] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:00.037 ************************************ 00:12:00.037 END TEST raid_state_function_test 00:12:00.037 ************************************ 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:00.037 00:12:00.037 real 0m11.761s 00:12:00.037 user 0m18.458s 00:12:00.037 sys 0m2.161s 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.037 05:50:07 bdev_raid -- 
bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:00.037 05:50:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:00.037 05:50:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.037 05:50:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:00.037 ************************************ 00:12:00.037 START TEST raid_state_function_test_sb 00:12:00.037 ************************************ 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.037 05:50:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:00.037 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:00.038 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:00.038 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:00.038 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:00.038 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:00.038 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:00.038 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:00.038 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:00.038 Process raid pid: 74757 00:12:00.038 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74757 00:12:00.038 05:50:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:00.038 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74757' 00:12:00.038 05:50:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74757 00:12:00.038 05:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 74757 ']' 00:12:00.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.038 05:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.038 05:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.038 05:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.038 05:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.038 05:50:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.038 [2024-12-12 05:50:07.323840] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:12:00.038 [2024-12-12 05:50:07.324043] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.038 [2024-12-12 05:50:07.496876] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:00.296 [2024-12-12 05:50:07.640102] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.554 [2024-12-12 05:50:07.879604] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.554 [2024-12-12 05:50:07.879652] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:00.813 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.813 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:00.813 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:00.813 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.813 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.813 [2024-12-12 05:50:08.156492] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:00.813 [2024-12-12 05:50:08.156623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:00.813 [2024-12-12 05:50:08.156661] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:00.813 [2024-12-12 05:50:08.156690] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:00.813 [2024-12-12 05:50:08.156759] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:00.813 [2024-12-12 05:50:08.156805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:00.813 [2024-12-12 05:50:08.156845] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:00.813 [2024-12-12 05:50:08.156890] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:00.813 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.813 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:00.813 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.813 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.813 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:00.813 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:00.813 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.813 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.813 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.813 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.813 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.813 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.813 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.813 05:50:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.813 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.813 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.813 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.813 "name": "Existed_Raid", 00:12:00.813 "uuid": "bdd9911b-3fb2-42e7-ad2f-4d06e75df241", 00:12:00.813 "strip_size_kb": 0, 00:12:00.813 "state": "configuring", 00:12:00.813 "raid_level": "raid1", 00:12:00.813 "superblock": true, 00:12:00.813 "num_base_bdevs": 4, 00:12:00.813 "num_base_bdevs_discovered": 0, 00:12:00.813 "num_base_bdevs_operational": 4, 00:12:00.813 "base_bdevs_list": [ 00:12:00.813 { 00:12:00.813 "name": "BaseBdev1", 00:12:00.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.813 "is_configured": false, 00:12:00.813 "data_offset": 0, 00:12:00.813 "data_size": 0 00:12:00.813 }, 00:12:00.813 { 00:12:00.813 "name": "BaseBdev2", 00:12:00.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.814 "is_configured": false, 00:12:00.814 "data_offset": 0, 00:12:00.814 "data_size": 0 00:12:00.814 }, 00:12:00.814 { 00:12:00.814 "name": "BaseBdev3", 00:12:00.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.814 "is_configured": false, 00:12:00.814 "data_offset": 0, 00:12:00.814 "data_size": 0 00:12:00.814 }, 00:12:00.814 { 00:12:00.814 "name": "BaseBdev4", 00:12:00.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.814 "is_configured": false, 00:12:00.814 "data_offset": 0, 00:12:00.814 "data_size": 0 00:12:00.814 } 00:12:00.814 ] 00:12:00.814 }' 00:12:00.814 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.814 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.382 [2024-12-12 05:50:08.623664] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:01.382 [2024-12-12 05:50:08.623795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.382 [2024-12-12 05:50:08.635608] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:01.382 [2024-12-12 05:50:08.635694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:01.382 [2024-12-12 05:50:08.635726] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:01.382 [2024-12-12 05:50:08.635754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:01.382 [2024-12-12 05:50:08.635777] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:01.382 [2024-12-12 05:50:08.635802] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:01.382 [2024-12-12 05:50:08.635849] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:01.382 [2024-12-12 05:50:08.635878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.382 [2024-12-12 05:50:08.682835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.382 BaseBdev1 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.382 [ 00:12:01.382 { 00:12:01.382 "name": "BaseBdev1", 00:12:01.382 "aliases": [ 00:12:01.382 "dab7e9e5-48b1-4811-80be-af6d411c53af" 00:12:01.382 ], 00:12:01.382 "product_name": "Malloc disk", 00:12:01.382 "block_size": 512, 00:12:01.382 "num_blocks": 65536, 00:12:01.382 "uuid": "dab7e9e5-48b1-4811-80be-af6d411c53af", 00:12:01.382 "assigned_rate_limits": { 00:12:01.382 "rw_ios_per_sec": 0, 00:12:01.382 "rw_mbytes_per_sec": 0, 00:12:01.382 "r_mbytes_per_sec": 0, 00:12:01.382 "w_mbytes_per_sec": 0 00:12:01.382 }, 00:12:01.382 "claimed": true, 00:12:01.382 "claim_type": "exclusive_write", 00:12:01.382 "zoned": false, 00:12:01.382 "supported_io_types": { 00:12:01.382 "read": true, 00:12:01.382 "write": true, 00:12:01.382 "unmap": true, 00:12:01.382 "flush": true, 00:12:01.382 "reset": true, 00:12:01.382 "nvme_admin": false, 00:12:01.382 "nvme_io": false, 00:12:01.382 "nvme_io_md": false, 00:12:01.382 "write_zeroes": true, 00:12:01.382 "zcopy": true, 00:12:01.382 "get_zone_info": false, 00:12:01.382 "zone_management": false, 00:12:01.382 "zone_append": false, 00:12:01.382 "compare": false, 00:12:01.382 "compare_and_write": false, 00:12:01.382 "abort": true, 00:12:01.382 "seek_hole": false, 00:12:01.382 "seek_data": false, 00:12:01.382 "copy": true, 00:12:01.382 "nvme_iov_md": false 00:12:01.382 }, 00:12:01.382 "memory_domains": [ 00:12:01.382 { 00:12:01.382 "dma_device_id": "system", 00:12:01.382 "dma_device_type": 1 00:12:01.382 }, 00:12:01.382 { 00:12:01.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.382 "dma_device_type": 2 00:12:01.382 } 00:12:01.382 ], 00:12:01.382 "driver_specific": {} 
00:12:01.382 } 00:12:01.382 ] 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.382 "name": "Existed_Raid", 00:12:01.382 "uuid": "eda4ba43-89f9-4041-8d50-9461da91cc6d", 00:12:01.382 "strip_size_kb": 0, 00:12:01.382 "state": "configuring", 00:12:01.382 "raid_level": "raid1", 00:12:01.382 "superblock": true, 00:12:01.382 "num_base_bdevs": 4, 00:12:01.382 "num_base_bdevs_discovered": 1, 00:12:01.382 "num_base_bdevs_operational": 4, 00:12:01.382 "base_bdevs_list": [ 00:12:01.382 { 00:12:01.382 "name": "BaseBdev1", 00:12:01.382 "uuid": "dab7e9e5-48b1-4811-80be-af6d411c53af", 00:12:01.382 "is_configured": true, 00:12:01.382 "data_offset": 2048, 00:12:01.382 "data_size": 63488 00:12:01.382 }, 00:12:01.382 { 00:12:01.382 "name": "BaseBdev2", 00:12:01.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.382 "is_configured": false, 00:12:01.382 "data_offset": 0, 00:12:01.382 "data_size": 0 00:12:01.382 }, 00:12:01.382 { 00:12:01.382 "name": "BaseBdev3", 00:12:01.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.382 "is_configured": false, 00:12:01.382 "data_offset": 0, 00:12:01.382 "data_size": 0 00:12:01.382 }, 00:12:01.382 { 00:12:01.382 "name": "BaseBdev4", 00:12:01.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.382 "is_configured": false, 00:12:01.382 "data_offset": 0, 00:12:01.382 "data_size": 0 00:12:01.382 } 00:12:01.382 ] 00:12:01.382 }' 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.382 05:50:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.641 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:01.641 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.641 05:50:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:01.641 [2024-12-12 05:50:09.162125] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:01.641 [2024-12-12 05:50:09.162249] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.900 [2024-12-12 05:50:09.174170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.900 [2024-12-12 05:50:09.176159] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:01.900 [2024-12-12 05:50:09.176270] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:01.900 [2024-12-12 05:50:09.176308] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:01.900 [2024-12-12 05:50:09.176338] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:01.900 [2024-12-12 05:50:09.176362] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:01.900 [2024-12-12 05:50:09.176450] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:01.900 05:50:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.900 "name": 
"Existed_Raid", 00:12:01.900 "uuid": "08824a7d-442a-4198-8d4d-491d6b6011ec", 00:12:01.900 "strip_size_kb": 0, 00:12:01.900 "state": "configuring", 00:12:01.900 "raid_level": "raid1", 00:12:01.900 "superblock": true, 00:12:01.900 "num_base_bdevs": 4, 00:12:01.900 "num_base_bdevs_discovered": 1, 00:12:01.900 "num_base_bdevs_operational": 4, 00:12:01.900 "base_bdevs_list": [ 00:12:01.900 { 00:12:01.900 "name": "BaseBdev1", 00:12:01.900 "uuid": "dab7e9e5-48b1-4811-80be-af6d411c53af", 00:12:01.900 "is_configured": true, 00:12:01.900 "data_offset": 2048, 00:12:01.900 "data_size": 63488 00:12:01.900 }, 00:12:01.900 { 00:12:01.900 "name": "BaseBdev2", 00:12:01.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.900 "is_configured": false, 00:12:01.900 "data_offset": 0, 00:12:01.900 "data_size": 0 00:12:01.900 }, 00:12:01.900 { 00:12:01.900 "name": "BaseBdev3", 00:12:01.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.900 "is_configured": false, 00:12:01.900 "data_offset": 0, 00:12:01.900 "data_size": 0 00:12:01.900 }, 00:12:01.900 { 00:12:01.900 "name": "BaseBdev4", 00:12:01.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.900 "is_configured": false, 00:12:01.900 "data_offset": 0, 00:12:01.900 "data_size": 0 00:12:01.900 } 00:12:01.900 ] 00:12:01.900 }' 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.900 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.159 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:02.159 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.159 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.159 [2024-12-12 05:50:09.596876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:02.159 
BaseBdev2 00:12:02.159 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.159 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:02.159 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:02.159 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:02.159 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:02.159 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:02.159 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:02.159 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:02.159 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.159 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.159 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.159 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:02.159 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.159 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.159 [ 00:12:02.159 { 00:12:02.159 "name": "BaseBdev2", 00:12:02.159 "aliases": [ 00:12:02.159 "a1c60b05-aa4d-490d-b798-e014242926fb" 00:12:02.159 ], 00:12:02.159 "product_name": "Malloc disk", 00:12:02.159 "block_size": 512, 00:12:02.159 "num_blocks": 65536, 00:12:02.159 "uuid": "a1c60b05-aa4d-490d-b798-e014242926fb", 00:12:02.159 "assigned_rate_limits": { 
00:12:02.159 "rw_ios_per_sec": 0, 00:12:02.159 "rw_mbytes_per_sec": 0, 00:12:02.160 "r_mbytes_per_sec": 0, 00:12:02.160 "w_mbytes_per_sec": 0 00:12:02.160 }, 00:12:02.160 "claimed": true, 00:12:02.160 "claim_type": "exclusive_write", 00:12:02.160 "zoned": false, 00:12:02.160 "supported_io_types": { 00:12:02.160 "read": true, 00:12:02.160 "write": true, 00:12:02.160 "unmap": true, 00:12:02.160 "flush": true, 00:12:02.160 "reset": true, 00:12:02.160 "nvme_admin": false, 00:12:02.160 "nvme_io": false, 00:12:02.160 "nvme_io_md": false, 00:12:02.160 "write_zeroes": true, 00:12:02.160 "zcopy": true, 00:12:02.160 "get_zone_info": false, 00:12:02.160 "zone_management": false, 00:12:02.160 "zone_append": false, 00:12:02.160 "compare": false, 00:12:02.160 "compare_and_write": false, 00:12:02.160 "abort": true, 00:12:02.160 "seek_hole": false, 00:12:02.160 "seek_data": false, 00:12:02.160 "copy": true, 00:12:02.160 "nvme_iov_md": false 00:12:02.160 }, 00:12:02.160 "memory_domains": [ 00:12:02.160 { 00:12:02.160 "dma_device_id": "system", 00:12:02.160 "dma_device_type": 1 00:12:02.160 }, 00:12:02.160 { 00:12:02.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.160 "dma_device_type": 2 00:12:02.160 } 00:12:02.160 ], 00:12:02.160 "driver_specific": {} 00:12:02.160 } 00:12:02.160 ] 00:12:02.160 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.160 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:02.160 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:02.160 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:02.160 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:02.160 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 
00:12:02.160 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.160 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.160 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:02.160 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.160 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.160 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.160 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.160 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.160 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.160 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.160 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.160 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.160 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.419 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.419 "name": "Existed_Raid", 00:12:02.419 "uuid": "08824a7d-442a-4198-8d4d-491d6b6011ec", 00:12:02.419 "strip_size_kb": 0, 00:12:02.419 "state": "configuring", 00:12:02.419 "raid_level": "raid1", 00:12:02.419 "superblock": true, 00:12:02.419 "num_base_bdevs": 4, 00:12:02.419 "num_base_bdevs_discovered": 2, 00:12:02.419 "num_base_bdevs_operational": 4, 00:12:02.419 
"base_bdevs_list": [ 00:12:02.419 { 00:12:02.419 "name": "BaseBdev1", 00:12:02.419 "uuid": "dab7e9e5-48b1-4811-80be-af6d411c53af", 00:12:02.419 "is_configured": true, 00:12:02.419 "data_offset": 2048, 00:12:02.419 "data_size": 63488 00:12:02.419 }, 00:12:02.419 { 00:12:02.419 "name": "BaseBdev2", 00:12:02.419 "uuid": "a1c60b05-aa4d-490d-b798-e014242926fb", 00:12:02.419 "is_configured": true, 00:12:02.419 "data_offset": 2048, 00:12:02.419 "data_size": 63488 00:12:02.419 }, 00:12:02.419 { 00:12:02.419 "name": "BaseBdev3", 00:12:02.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.419 "is_configured": false, 00:12:02.419 "data_offset": 0, 00:12:02.419 "data_size": 0 00:12:02.419 }, 00:12:02.419 { 00:12:02.419 "name": "BaseBdev4", 00:12:02.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.419 "is_configured": false, 00:12:02.419 "data_offset": 0, 00:12:02.419 "data_size": 0 00:12:02.419 } 00:12:02.419 ] 00:12:02.419 }' 00:12:02.419 05:50:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.419 05:50:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.685 [2024-12-12 05:50:10.120569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:02.685 BaseBdev3 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev3 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.685 [ 00:12:02.685 { 00:12:02.685 "name": "BaseBdev3", 00:12:02.685 "aliases": [ 00:12:02.685 "35388644-af5e-4356-b565-974213e817ef" 00:12:02.685 ], 00:12:02.685 "product_name": "Malloc disk", 00:12:02.685 "block_size": 512, 00:12:02.685 "num_blocks": 65536, 00:12:02.685 "uuid": "35388644-af5e-4356-b565-974213e817ef", 00:12:02.685 "assigned_rate_limits": { 00:12:02.685 "rw_ios_per_sec": 0, 00:12:02.685 "rw_mbytes_per_sec": 0, 00:12:02.685 "r_mbytes_per_sec": 0, 00:12:02.685 "w_mbytes_per_sec": 0 00:12:02.685 }, 00:12:02.685 "claimed": true, 00:12:02.685 "claim_type": "exclusive_write", 00:12:02.685 "zoned": false, 00:12:02.685 "supported_io_types": { 00:12:02.685 "read": true, 00:12:02.685 
"write": true, 00:12:02.685 "unmap": true, 00:12:02.685 "flush": true, 00:12:02.685 "reset": true, 00:12:02.685 "nvme_admin": false, 00:12:02.685 "nvme_io": false, 00:12:02.685 "nvme_io_md": false, 00:12:02.685 "write_zeroes": true, 00:12:02.685 "zcopy": true, 00:12:02.685 "get_zone_info": false, 00:12:02.685 "zone_management": false, 00:12:02.685 "zone_append": false, 00:12:02.685 "compare": false, 00:12:02.685 "compare_and_write": false, 00:12:02.685 "abort": true, 00:12:02.685 "seek_hole": false, 00:12:02.685 "seek_data": false, 00:12:02.685 "copy": true, 00:12:02.685 "nvme_iov_md": false 00:12:02.685 }, 00:12:02.685 "memory_domains": [ 00:12:02.685 { 00:12:02.685 "dma_device_id": "system", 00:12:02.685 "dma_device_type": 1 00:12:02.685 }, 00:12:02.685 { 00:12:02.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.685 "dma_device_type": 2 00:12:02.685 } 00:12:02.685 ], 00:12:02.685 "driver_specific": {} 00:12:02.685 } 00:12:02.685 ] 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.685 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.965 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.965 "name": "Existed_Raid", 00:12:02.965 "uuid": "08824a7d-442a-4198-8d4d-491d6b6011ec", 00:12:02.965 "strip_size_kb": 0, 00:12:02.965 "state": "configuring", 00:12:02.965 "raid_level": "raid1", 00:12:02.965 "superblock": true, 00:12:02.965 "num_base_bdevs": 4, 00:12:02.965 "num_base_bdevs_discovered": 3, 00:12:02.965 "num_base_bdevs_operational": 4, 00:12:02.965 "base_bdevs_list": [ 00:12:02.965 { 00:12:02.965 "name": "BaseBdev1", 00:12:02.965 "uuid": "dab7e9e5-48b1-4811-80be-af6d411c53af", 00:12:02.965 "is_configured": true, 00:12:02.965 "data_offset": 2048, 00:12:02.965 "data_size": 63488 00:12:02.965 }, 00:12:02.965 { 00:12:02.965 "name": "BaseBdev2", 00:12:02.965 "uuid": 
"a1c60b05-aa4d-490d-b798-e014242926fb", 00:12:02.965 "is_configured": true, 00:12:02.965 "data_offset": 2048, 00:12:02.965 "data_size": 63488 00:12:02.965 }, 00:12:02.965 { 00:12:02.965 "name": "BaseBdev3", 00:12:02.965 "uuid": "35388644-af5e-4356-b565-974213e817ef", 00:12:02.965 "is_configured": true, 00:12:02.965 "data_offset": 2048, 00:12:02.965 "data_size": 63488 00:12:02.965 }, 00:12:02.965 { 00:12:02.965 "name": "BaseBdev4", 00:12:02.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.965 "is_configured": false, 00:12:02.965 "data_offset": 0, 00:12:02.965 "data_size": 0 00:12:02.965 } 00:12:02.965 ] 00:12:02.965 }' 00:12:02.965 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.965 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.234 [2024-12-12 05:50:10.654473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:03.234 [2024-12-12 05:50:10.654847] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:03.234 [2024-12-12 05:50:10.654872] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:03.234 [2024-12-12 05:50:10.655234] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:03.234 BaseBdev4 00:12:03.234 [2024-12-12 05:50:10.655466] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:03.234 [2024-12-12 05:50:10.655492] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:03.234 [2024-12-12 05:50:10.655730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.234 [ 00:12:03.234 { 00:12:03.234 "name": "BaseBdev4", 00:12:03.234 "aliases": [ 00:12:03.234 "2c62a229-1fb7-4693-bd19-18a8e1c17a94" 00:12:03.234 ], 00:12:03.234 "product_name": "Malloc disk", 00:12:03.234 "block_size": 512, 00:12:03.234 
"num_blocks": 65536, 00:12:03.234 "uuid": "2c62a229-1fb7-4693-bd19-18a8e1c17a94", 00:12:03.234 "assigned_rate_limits": { 00:12:03.234 "rw_ios_per_sec": 0, 00:12:03.234 "rw_mbytes_per_sec": 0, 00:12:03.234 "r_mbytes_per_sec": 0, 00:12:03.234 "w_mbytes_per_sec": 0 00:12:03.234 }, 00:12:03.234 "claimed": true, 00:12:03.234 "claim_type": "exclusive_write", 00:12:03.234 "zoned": false, 00:12:03.234 "supported_io_types": { 00:12:03.234 "read": true, 00:12:03.234 "write": true, 00:12:03.234 "unmap": true, 00:12:03.234 "flush": true, 00:12:03.234 "reset": true, 00:12:03.234 "nvme_admin": false, 00:12:03.234 "nvme_io": false, 00:12:03.234 "nvme_io_md": false, 00:12:03.234 "write_zeroes": true, 00:12:03.234 "zcopy": true, 00:12:03.234 "get_zone_info": false, 00:12:03.234 "zone_management": false, 00:12:03.234 "zone_append": false, 00:12:03.234 "compare": false, 00:12:03.234 "compare_and_write": false, 00:12:03.234 "abort": true, 00:12:03.234 "seek_hole": false, 00:12:03.234 "seek_data": false, 00:12:03.234 "copy": true, 00:12:03.234 "nvme_iov_md": false 00:12:03.234 }, 00:12:03.234 "memory_domains": [ 00:12:03.234 { 00:12:03.234 "dma_device_id": "system", 00:12:03.234 "dma_device_type": 1 00:12:03.234 }, 00:12:03.234 { 00:12:03.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.234 "dma_device_type": 2 00:12:03.234 } 00:12:03.234 ], 00:12:03.234 "driver_specific": {} 00:12:03.234 } 00:12:03.234 ] 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.234 "name": "Existed_Raid", 00:12:03.234 "uuid": "08824a7d-442a-4198-8d4d-491d6b6011ec", 00:12:03.234 "strip_size_kb": 0, 00:12:03.234 "state": "online", 00:12:03.234 "raid_level": "raid1", 00:12:03.234 "superblock": true, 00:12:03.234 "num_base_bdevs": 4, 
00:12:03.234 "num_base_bdevs_discovered": 4, 00:12:03.234 "num_base_bdevs_operational": 4, 00:12:03.234 "base_bdevs_list": [ 00:12:03.234 { 00:12:03.234 "name": "BaseBdev1", 00:12:03.234 "uuid": "dab7e9e5-48b1-4811-80be-af6d411c53af", 00:12:03.234 "is_configured": true, 00:12:03.234 "data_offset": 2048, 00:12:03.234 "data_size": 63488 00:12:03.234 }, 00:12:03.234 { 00:12:03.234 "name": "BaseBdev2", 00:12:03.234 "uuid": "a1c60b05-aa4d-490d-b798-e014242926fb", 00:12:03.234 "is_configured": true, 00:12:03.234 "data_offset": 2048, 00:12:03.234 "data_size": 63488 00:12:03.234 }, 00:12:03.234 { 00:12:03.234 "name": "BaseBdev3", 00:12:03.234 "uuid": "35388644-af5e-4356-b565-974213e817ef", 00:12:03.234 "is_configured": true, 00:12:03.234 "data_offset": 2048, 00:12:03.234 "data_size": 63488 00:12:03.234 }, 00:12:03.234 { 00:12:03.234 "name": "BaseBdev4", 00:12:03.234 "uuid": "2c62a229-1fb7-4693-bd19-18a8e1c17a94", 00:12:03.234 "is_configured": true, 00:12:03.234 "data_offset": 2048, 00:12:03.234 "data_size": 63488 00:12:03.234 } 00:12:03.234 ] 00:12:03.234 }' 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.234 05:50:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.802 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:03.802 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:03.802 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:03.802 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:03.802 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:03.802 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:03.802 
05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:03.802 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.802 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.802 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:03.802 [2024-12-12 05:50:11.138202] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.802 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.802 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:03.802 "name": "Existed_Raid", 00:12:03.802 "aliases": [ 00:12:03.802 "08824a7d-442a-4198-8d4d-491d6b6011ec" 00:12:03.802 ], 00:12:03.802 "product_name": "Raid Volume", 00:12:03.802 "block_size": 512, 00:12:03.802 "num_blocks": 63488, 00:12:03.802 "uuid": "08824a7d-442a-4198-8d4d-491d6b6011ec", 00:12:03.802 "assigned_rate_limits": { 00:12:03.802 "rw_ios_per_sec": 0, 00:12:03.802 "rw_mbytes_per_sec": 0, 00:12:03.802 "r_mbytes_per_sec": 0, 00:12:03.802 "w_mbytes_per_sec": 0 00:12:03.802 }, 00:12:03.802 "claimed": false, 00:12:03.802 "zoned": false, 00:12:03.802 "supported_io_types": { 00:12:03.802 "read": true, 00:12:03.802 "write": true, 00:12:03.802 "unmap": false, 00:12:03.802 "flush": false, 00:12:03.802 "reset": true, 00:12:03.802 "nvme_admin": false, 00:12:03.802 "nvme_io": false, 00:12:03.802 "nvme_io_md": false, 00:12:03.802 "write_zeroes": true, 00:12:03.802 "zcopy": false, 00:12:03.802 "get_zone_info": false, 00:12:03.802 "zone_management": false, 00:12:03.802 "zone_append": false, 00:12:03.802 "compare": false, 00:12:03.802 "compare_and_write": false, 00:12:03.802 "abort": false, 00:12:03.802 "seek_hole": false, 00:12:03.802 "seek_data": false, 00:12:03.802 "copy": false, 00:12:03.802 
"nvme_iov_md": false 00:12:03.802 }, 00:12:03.802 "memory_domains": [ 00:12:03.802 { 00:12:03.802 "dma_device_id": "system", 00:12:03.802 "dma_device_type": 1 00:12:03.802 }, 00:12:03.802 { 00:12:03.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.802 "dma_device_type": 2 00:12:03.802 }, 00:12:03.802 { 00:12:03.802 "dma_device_id": "system", 00:12:03.802 "dma_device_type": 1 00:12:03.802 }, 00:12:03.802 { 00:12:03.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.802 "dma_device_type": 2 00:12:03.802 }, 00:12:03.802 { 00:12:03.802 "dma_device_id": "system", 00:12:03.802 "dma_device_type": 1 00:12:03.802 }, 00:12:03.802 { 00:12:03.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.802 "dma_device_type": 2 00:12:03.802 }, 00:12:03.802 { 00:12:03.802 "dma_device_id": "system", 00:12:03.802 "dma_device_type": 1 00:12:03.802 }, 00:12:03.802 { 00:12:03.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.802 "dma_device_type": 2 00:12:03.802 } 00:12:03.802 ], 00:12:03.802 "driver_specific": { 00:12:03.802 "raid": { 00:12:03.802 "uuid": "08824a7d-442a-4198-8d4d-491d6b6011ec", 00:12:03.802 "strip_size_kb": 0, 00:12:03.802 "state": "online", 00:12:03.802 "raid_level": "raid1", 00:12:03.802 "superblock": true, 00:12:03.802 "num_base_bdevs": 4, 00:12:03.802 "num_base_bdevs_discovered": 4, 00:12:03.802 "num_base_bdevs_operational": 4, 00:12:03.802 "base_bdevs_list": [ 00:12:03.802 { 00:12:03.802 "name": "BaseBdev1", 00:12:03.802 "uuid": "dab7e9e5-48b1-4811-80be-af6d411c53af", 00:12:03.802 "is_configured": true, 00:12:03.802 "data_offset": 2048, 00:12:03.802 "data_size": 63488 00:12:03.802 }, 00:12:03.802 { 00:12:03.802 "name": "BaseBdev2", 00:12:03.802 "uuid": "a1c60b05-aa4d-490d-b798-e014242926fb", 00:12:03.802 "is_configured": true, 00:12:03.802 "data_offset": 2048, 00:12:03.803 "data_size": 63488 00:12:03.803 }, 00:12:03.803 { 00:12:03.803 "name": "BaseBdev3", 00:12:03.803 "uuid": "35388644-af5e-4356-b565-974213e817ef", 00:12:03.803 "is_configured": true, 
00:12:03.803 "data_offset": 2048, 00:12:03.803 "data_size": 63488 00:12:03.803 }, 00:12:03.803 { 00:12:03.803 "name": "BaseBdev4", 00:12:03.803 "uuid": "2c62a229-1fb7-4693-bd19-18a8e1c17a94", 00:12:03.803 "is_configured": true, 00:12:03.803 "data_offset": 2048, 00:12:03.803 "data_size": 63488 00:12:03.803 } 00:12:03.803 ] 00:12:03.803 } 00:12:03.803 } 00:12:03.803 }' 00:12:03.803 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:03.803 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:03.803 BaseBdev2 00:12:03.803 BaseBdev3 00:12:03.803 BaseBdev4' 00:12:03.803 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.803 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:03.803 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.803 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.803 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:03.803 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.803 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.803 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.803 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.803 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.803 05:50:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.803 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:03.803 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.803 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.803 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.803 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.061 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.061 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.061 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.062 [2024-12-12 05:50:11.429389] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:04.062 05:50:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.062 "name": "Existed_Raid", 00:12:04.062 "uuid": "08824a7d-442a-4198-8d4d-491d6b6011ec", 00:12:04.062 "strip_size_kb": 0, 00:12:04.062 
"state": "online", 00:12:04.062 "raid_level": "raid1", 00:12:04.062 "superblock": true, 00:12:04.062 "num_base_bdevs": 4, 00:12:04.062 "num_base_bdevs_discovered": 3, 00:12:04.062 "num_base_bdevs_operational": 3, 00:12:04.062 "base_bdevs_list": [ 00:12:04.062 { 00:12:04.062 "name": null, 00:12:04.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.062 "is_configured": false, 00:12:04.062 "data_offset": 0, 00:12:04.062 "data_size": 63488 00:12:04.062 }, 00:12:04.062 { 00:12:04.062 "name": "BaseBdev2", 00:12:04.062 "uuid": "a1c60b05-aa4d-490d-b798-e014242926fb", 00:12:04.062 "is_configured": true, 00:12:04.062 "data_offset": 2048, 00:12:04.062 "data_size": 63488 00:12:04.062 }, 00:12:04.062 { 00:12:04.062 "name": "BaseBdev3", 00:12:04.062 "uuid": "35388644-af5e-4356-b565-974213e817ef", 00:12:04.062 "is_configured": true, 00:12:04.062 "data_offset": 2048, 00:12:04.062 "data_size": 63488 00:12:04.062 }, 00:12:04.062 { 00:12:04.062 "name": "BaseBdev4", 00:12:04.062 "uuid": "2c62a229-1fb7-4693-bd19-18a8e1c17a94", 00:12:04.062 "is_configured": true, 00:12:04.062 "data_offset": 2048, 00:12:04.062 "data_size": 63488 00:12:04.062 } 00:12:04.062 ] 00:12:04.062 }' 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.062 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.629 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:04.629 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:04.629 05:50:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.629 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.629 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.629 05:50:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:04.629 05:50:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.629 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:04.629 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:04.629 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:04.629 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.629 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.629 [2024-12-12 05:50:12.039343] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:04.629 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.629 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:04.629 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:04.629 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.629 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.630 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:04.630 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.888 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.888 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:04.888 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid 
'!=' Existed_Raid ']'
00:12:04.888 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3
00:12:04.888 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.888 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.888 [2024-12-12 05:50:12.192384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:12:04.888 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.888 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:12:04.888 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:04.888 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:04.888 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.888 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.888 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:12:04.888 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.888 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:12:04.888 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:12:04.888 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4
00:12:04.888 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.888 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:04.888 [2024-12-12 05:50:12.345133]
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:12:04.888 [2024-12-12 05:50:12.345271] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:05.147 [2024-12-12 05:50:12.437970] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:05.148 [2024-12-12 05:50:12.438034] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:05.148 [2024-12-12 05:50:12.438048] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ ))
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)'
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev=
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']'
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']'
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 ))
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.148 BaseBdev2
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb
-- common/autotest_common.sh@10 -- # set +x
00:12:05.148 [
00:12:05.148 {
00:12:05.148 "name": "BaseBdev2",
00:12:05.148 "aliases": [
00:12:05.148 "bc9e740c-eb74-45af-bc08-7968a9b812ed"
00:12:05.148 ],
00:12:05.148 "product_name": "Malloc disk",
00:12:05.148 "block_size": 512,
00:12:05.148 "num_blocks": 65536,
00:12:05.148 "uuid": "bc9e740c-eb74-45af-bc08-7968a9b812ed",
00:12:05.148 "assigned_rate_limits": {
00:12:05.148 "rw_ios_per_sec": 0,
00:12:05.148 "rw_mbytes_per_sec": 0,
00:12:05.148 "r_mbytes_per_sec": 0,
00:12:05.148 "w_mbytes_per_sec": 0
00:12:05.148 },
00:12:05.148 "claimed": false,
00:12:05.148 "zoned": false,
00:12:05.148 "supported_io_types": {
00:12:05.148 "read": true,
00:12:05.148 "write": true,
00:12:05.148 "unmap": true,
00:12:05.148 "flush": true,
00:12:05.148 "reset": true,
00:12:05.148 "nvme_admin": false,
00:12:05.148 "nvme_io": false,
00:12:05.148 "nvme_io_md": false,
00:12:05.148 "write_zeroes": true,
00:12:05.148 "zcopy": true,
00:12:05.148 "get_zone_info": false,
00:12:05.148 "zone_management": false,
00:12:05.148 "zone_append": false,
00:12:05.148 "compare": false,
00:12:05.148 "compare_and_write": false,
00:12:05.148 "abort": true,
00:12:05.148 "seek_hole": false,
00:12:05.148 "seek_data": false,
00:12:05.148 "copy": true,
00:12:05.148 "nvme_iov_md": false
00:12:05.148 },
00:12:05.148 "memory_domains": [
00:12:05.148 {
00:12:05.148 "dma_device_id": "system",
00:12:05.148 "dma_device_type": 1
00:12:05.148 },
00:12:05.148 {
00:12:05.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:05.148 "dma_device_type": 2
00:12:05.148 }
00:12:05.148 ],
00:12:05.148 "driver_specific": {}
00:12:05.148 }
00:12:05.148 ]
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:12:05.148 05:50:12
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.148 BaseBdev3
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.148 05:50:12
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.148 [
00:12:05.148 {
00:12:05.148 "name": "BaseBdev3",
00:12:05.148 "aliases": [
00:12:05.148 "8874caaf-2f8c-42da-95db-0af592caa4ed"
00:12:05.148 ],
00:12:05.148 "product_name": "Malloc disk",
00:12:05.148 "block_size": 512,
00:12:05.148 "num_blocks": 65536,
00:12:05.148 "uuid": "8874caaf-2f8c-42da-95db-0af592caa4ed",
00:12:05.148 "assigned_rate_limits": {
00:12:05.148 "rw_ios_per_sec": 0,
00:12:05.148 "rw_mbytes_per_sec": 0,
00:12:05.148 "r_mbytes_per_sec": 0,
00:12:05.148 "w_mbytes_per_sec": 0
00:12:05.148 },
00:12:05.148 "claimed": false,
00:12:05.148 "zoned": false,
00:12:05.148 "supported_io_types": {
00:12:05.148 "read": true,
00:12:05.148 "write": true,
00:12:05.148 "unmap": true,
00:12:05.148 "flush": true,
00:12:05.148 "reset": true,
00:12:05.148 "nvme_admin": false,
00:12:05.148 "nvme_io": false,
00:12:05.148 "nvme_io_md": false,
00:12:05.148 "write_zeroes": true,
00:12:05.148 "zcopy": true,
00:12:05.148 "get_zone_info": false,
00:12:05.148 "zone_management": false,
00:12:05.148 "zone_append": false,
00:12:05.148 "compare": false,
00:12:05.148 "compare_and_write": false,
00:12:05.148 "abort": true,
00:12:05.148 "seek_hole": false,
00:12:05.148 "seek_data": false,
00:12:05.148 "copy": true,
00:12:05.148 "nvme_iov_md": false
00:12:05.148 },
00:12:05.148 "memory_domains": [
00:12:05.148 {
00:12:05.148 "dma_device_id": "system",
00:12:05.148 "dma_device_type": 1
00:12:05.148 },
00:12:05.148 {
00:12:05.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:05.148 "dma_device_type": 2
00:12:05.148 }
00:12:05.148 ],
00:12:05.148 "driver_specific": {}
00:12:05.148 }
00:12:05.148 ]
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb --
bdev/bdev_raid.sh@286 -- # (( i++ ))
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.148 BaseBdev4
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4
00:12:05.148 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4
00:12:05.407 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:05.407 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:05.407 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:05.407 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:05.407 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:05.407 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.407 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.407 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb --
common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.408 [
00:12:05.408 {
00:12:05.408 "name": "BaseBdev4",
00:12:05.408 "aliases": [
00:12:05.408 "0aa0fc09-68b7-4f86-88ba-e6b36d5529dd"
00:12:05.408 ],
00:12:05.408 "product_name": "Malloc disk",
00:12:05.408 "block_size": 512,
00:12:05.408 "num_blocks": 65536,
00:12:05.408 "uuid": "0aa0fc09-68b7-4f86-88ba-e6b36d5529dd",
00:12:05.408 "assigned_rate_limits": {
00:12:05.408 "rw_ios_per_sec": 0,
00:12:05.408 "rw_mbytes_per_sec": 0,
00:12:05.408 "r_mbytes_per_sec": 0,
00:12:05.408 "w_mbytes_per_sec": 0
00:12:05.408 },
00:12:05.408 "claimed": false,
00:12:05.408 "zoned": false,
00:12:05.408 "supported_io_types": {
00:12:05.408 "read": true,
00:12:05.408 "write": true,
00:12:05.408 "unmap": true,
00:12:05.408 "flush": true,
00:12:05.408 "reset": true,
00:12:05.408 "nvme_admin": false,
00:12:05.408 "nvme_io": false,
00:12:05.408 "nvme_io_md": false,
00:12:05.408 "write_zeroes": true,
00:12:05.408 "zcopy": true,
00:12:05.408 "get_zone_info": false,
00:12:05.408 "zone_management": false,
00:12:05.408 "zone_append": false,
00:12:05.408 "compare": false,
00:12:05.408 "compare_and_write": false,
00:12:05.408 "abort": true,
00:12:05.408 "seek_hole": false,
00:12:05.408 "seek_data": false,
00:12:05.408 "copy": true,
00:12:05.408 "nvme_iov_md": false
00:12:05.408 },
00:12:05.408 "memory_domains": [
00:12:05.408 {
00:12:05.408 "dma_device_id": "system",
00:12:05.408 "dma_device_type": 1
00:12:05.408 },
00:12:05.408 {
00:12:05.408 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:05.408 "dma_device_type": 2
00:12:05.408 }
00:12:05.408 ],
00:12:05.408 "driver_specific": {}
00:12:05.408 }
00:12:05.408 ]
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ ))
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs ))
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.408 [2024-12-12 05:50:12.706206] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:12:05.408 [2024-12-12 05:50:12.706258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:12:05.408 [2024-12-12 05:50:12.706280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:12:05.408 [2024-12-12 05:50:12.708146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:12:05.408 [2024-12-12 05:50:12.708204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:05.408 "name": "Existed_Raid",
00:12:05.408 "uuid": "ec2438eb-546d-46ce-8213-3891fb5fdc6f",
00:12:05.408 "strip_size_kb": 0,
00:12:05.408 "state": "configuring",
00:12:05.408 "raid_level": "raid1",
00:12:05.408 "superblock": true,
00:12:05.408 "num_base_bdevs": 4,
00:12:05.408 "num_base_bdevs_discovered": 3,
00:12:05.408 "num_base_bdevs_operational": 4,
00:12:05.408 "base_bdevs_list": [
00:12:05.408 {
00:12:05.408 "name": "BaseBdev1",
00:12:05.408 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:05.408 "is_configured": false,
00:12:05.408 "data_offset": 0,
00:12:05.408 "data_size": 0
00:12:05.408 },
00:12:05.408 {
00:12:05.408 "name": "BaseBdev2",
00:12:05.408 "uuid": "bc9e740c-eb74-45af-bc08-7968a9b812ed",
00:12:05.408 "is_configured": true,
00:12:05.408 "data_offset": 2048,
00:12:05.408 "data_size": 63488
00:12:05.408 },
00:12:05.408 {
00:12:05.408 "name": "BaseBdev3",
00:12:05.408 "uuid": "8874caaf-2f8c-42da-95db-0af592caa4ed",
00:12:05.408 "is_configured": true,
00:12:05.408 "data_offset": 2048,
00:12:05.408 "data_size": 63488
00:12:05.408 },
00:12:05.408 {
00:12:05.408 "name": "BaseBdev4",
00:12:05.408 "uuid": "0aa0fc09-68b7-4f86-88ba-e6b36d5529dd",
00:12:05.408 "is_configured": true,
00:12:05.408 "data_offset": 2048,
00:12:05.408 "data_size": 63488
00:12:05.408 }
00:12:05.408 ]
00:12:05.408 }'
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:05.408 05:50:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.666 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:12:05.666 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.666 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.666 [2024-12-12 05:50:13.161563] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:12:05.666 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.666 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:05.666 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:05.666 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:05.666 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:05.666 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:05.666 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:05.666 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:05.666 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:05.666 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:05.666 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:05.666 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:05.666 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.666 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:05.666 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:05.925 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.925 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:05.925 "name": "Existed_Raid",
00:12:05.925 "uuid": "ec2438eb-546d-46ce-8213-3891fb5fdc6f",
00:12:05.925 "strip_size_kb": 0,
00:12:05.925 "state": "configuring",
00:12:05.925 "raid_level": "raid1",
00:12:05.925 "superblock": true,
00:12:05.925 "num_base_bdevs": 4,
00:12:05.925 "num_base_bdevs_discovered": 2,
00:12:05.925 "num_base_bdevs_operational": 4,
00:12:05.925 "base_bdevs_list": [
00:12:05.925 {
00:12:05.925 "name": "BaseBdev1",
00:12:05.925 "uuid": "00000000-0000-0000-0000-000000000000",
00:12:05.925 "is_configured": false,
00:12:05.925 "data_offset": 0,
00:12:05.925 "data_size": 0
00:12:05.925 },
00:12:05.925 {
00:12:05.925 "name": null,
00:12:05.925 "uuid": "bc9e740c-eb74-45af-bc08-7968a9b812ed",
00:12:05.925
"is_configured": false,
00:12:05.925 "data_offset": 0,
00:12:05.925 "data_size": 63488
00:12:05.925 },
00:12:05.925 {
00:12:05.925 "name": "BaseBdev3",
00:12:05.925 "uuid": "8874caaf-2f8c-42da-95db-0af592caa4ed",
00:12:05.925 "is_configured": true,
00:12:05.925 "data_offset": 2048,
00:12:05.925 "data_size": 63488
00:12:05.925 },
00:12:05.925 {
00:12:05.925 "name": "BaseBdev4",
00:12:05.925 "uuid": "0aa0fc09-68b7-4f86-88ba-e6b36d5529dd",
00:12:05.925 "is_configured": true,
00:12:05.925 "data_offset": 2048,
00:12:05.925 "data_size": 63488
00:12:05.925 }
00:12:05.925 ]
00:12:05.925 }'
00:12:05.925 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:05.925 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured'
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]]
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.184 [2024-12-12 05:50:13.681145] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:12:06.184 BaseBdev1
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.184 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.444 [
00:12:06.444 {
00:12:06.444 "name": "BaseBdev1",
00:12:06.444 "aliases": [
00:12:06.444 "628f541c-527c-4e60-a02e-38deb89bc2e6"
00:12:06.444 ],
00:12:06.444 "product_name": "Malloc disk",
00:12:06.444 "block_size": 512,
00:12:06.444 "num_blocks": 65536,
00:12:06.444 "uuid": "628f541c-527c-4e60-a02e-38deb89bc2e6",
00:12:06.444 "assigned_rate_limits": {
"rw_ios_per_sec": 0,
00:12:06.444 "rw_mbytes_per_sec": 0,
00:12:06.444 "r_mbytes_per_sec": 0,
00:12:06.444 "w_mbytes_per_sec": 0
00:12:06.444 },
00:12:06.444 "claimed": true,
00:12:06.444 "claim_type": "exclusive_write",
00:12:06.444 "zoned": false,
00:12:06.444 "supported_io_types": {
00:12:06.444 "read": true,
00:12:06.444 "write": true,
00:12:06.444 "unmap": true,
00:12:06.444 "flush": true,
00:12:06.444 "reset": true,
00:12:06.444 "nvme_admin": false,
00:12:06.444 "nvme_io": false,
00:12:06.444 "nvme_io_md": false,
00:12:06.444 "write_zeroes": true,
00:12:06.444 "zcopy": true,
00:12:06.444 "get_zone_info": false,
00:12:06.444 "zone_management": false,
00:12:06.444 "zone_append": false,
00:12:06.444 "compare": false,
00:12:06.444 "compare_and_write": false,
00:12:06.444 "abort": true,
00:12:06.444 "seek_hole": false,
00:12:06.444 "seek_data": false,
00:12:06.444 "copy": true,
00:12:06.444 "nvme_iov_md": false
00:12:06.444 },
00:12:06.444 "memory_domains": [
00:12:06.444 {
00:12:06.444 "dma_device_id": "system",
00:12:06.444 "dma_device_type": 1
00:12:06.444 },
00:12:06.444 {
00:12:06.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:06.444 "dma_device_type": 2
00:12:06.444 }
00:12:06.444 ],
00:12:06.444 "driver_specific": {}
00:12:06.444 }
00:12:06.444 ]
00:12:06.444 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.444 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:12:06.444 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4
00:12:06.444 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:12:06.444 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:06.444 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local
raid_level=raid1
00:12:06.444 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:06.444 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:06.444 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:06.444 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:06.444 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:06.444 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:06.444 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:06.444 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.444 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:12:06.444 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.444 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.444 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:06.444 "name": "Existed_Raid",
00:12:06.444 "uuid": "ec2438eb-546d-46ce-8213-3891fb5fdc6f",
00:12:06.444 "strip_size_kb": 0,
00:12:06.444 "state": "configuring",
00:12:06.444 "raid_level": "raid1",
00:12:06.444 "superblock": true,
00:12:06.444 "num_base_bdevs": 4,
00:12:06.445 "num_base_bdevs_discovered": 3,
00:12:06.445 "num_base_bdevs_operational": 4,
00:12:06.445 "base_bdevs_list": [
00:12:06.445 {
00:12:06.445 "name": "BaseBdev1",
00:12:06.445 "uuid": "628f541c-527c-4e60-a02e-38deb89bc2e6",
00:12:06.445 "is_configured": true,
00:12:06.445 "data_offset": 2048,
00:12:06.445 "data_size": 63488
00:12:06.445 },
00:12:06.445 {
00:12:06.445 "name": null,
00:12:06.445 "uuid": "bc9e740c-eb74-45af-bc08-7968a9b812ed",
00:12:06.445 "is_configured": false,
00:12:06.445 "data_offset": 0,
00:12:06.445 "data_size": 63488
00:12:06.445 },
00:12:06.445 {
00:12:06.445 "name": "BaseBdev3",
00:12:06.445 "uuid": "8874caaf-2f8c-42da-95db-0af592caa4ed",
00:12:06.445 "is_configured": true,
00:12:06.445 "data_offset": 2048,
00:12:06.445 "data_size": 63488
00:12:06.445 },
00:12:06.445 {
00:12:06.445 "name": "BaseBdev4",
00:12:06.445 "uuid": "0aa0fc09-68b7-4f86-88ba-e6b36d5529dd",
00:12:06.445 "is_configured": true,
00:12:06.445 "data_offset": 2048,
00:12:06.445 "data_size": 63488
00:12:06.445 }
00:12:06.445 ]
00:12:06.445 }'
00:12:06.445 05:50:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:06.445 05:50:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured'
00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]]
00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3
00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:12:06.704
[2024-12-12 05:50:14.172411] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.704 05:50:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.704 "name": "Existed_Raid", 00:12:06.704 "uuid": "ec2438eb-546d-46ce-8213-3891fb5fdc6f", 00:12:06.704 "strip_size_kb": 0, 00:12:06.704 "state": "configuring", 00:12:06.704 "raid_level": "raid1", 00:12:06.704 "superblock": true, 00:12:06.704 "num_base_bdevs": 4, 00:12:06.704 "num_base_bdevs_discovered": 2, 00:12:06.704 "num_base_bdevs_operational": 4, 00:12:06.704 "base_bdevs_list": [ 00:12:06.704 { 00:12:06.704 "name": "BaseBdev1", 00:12:06.704 "uuid": "628f541c-527c-4e60-a02e-38deb89bc2e6", 00:12:06.704 "is_configured": true, 00:12:06.704 "data_offset": 2048, 00:12:06.704 "data_size": 63488 00:12:06.704 }, 00:12:06.704 { 00:12:06.704 "name": null, 00:12:06.704 "uuid": "bc9e740c-eb74-45af-bc08-7968a9b812ed", 00:12:06.704 "is_configured": false, 00:12:06.704 "data_offset": 0, 00:12:06.704 "data_size": 63488 00:12:06.704 }, 00:12:06.704 { 00:12:06.704 "name": null, 00:12:06.704 "uuid": "8874caaf-2f8c-42da-95db-0af592caa4ed", 00:12:06.704 "is_configured": false, 00:12:06.704 "data_offset": 0, 00:12:06.704 "data_size": 63488 00:12:06.704 }, 00:12:06.704 { 00:12:06.704 "name": "BaseBdev4", 00:12:06.704 "uuid": "0aa0fc09-68b7-4f86-88ba-e6b36d5529dd", 00:12:06.704 "is_configured": true, 00:12:06.704 "data_offset": 2048, 00:12:06.704 "data_size": 63488 00:12:06.704 } 00:12:06.704 ] 00:12:06.704 }' 00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.704 05:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.273 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.273 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:07.273 05:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.273 
05:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.273 05:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.273 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:07.273 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:07.273 05:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.273 05:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.273 [2024-12-12 05:50:14.675541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:07.273 05:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.273 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:07.273 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.273 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.273 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.273 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.273 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.273 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.274 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.274 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:07.274 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.274 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.274 05:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.274 05:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.274 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.274 05:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.274 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.274 "name": "Existed_Raid", 00:12:07.274 "uuid": "ec2438eb-546d-46ce-8213-3891fb5fdc6f", 00:12:07.274 "strip_size_kb": 0, 00:12:07.274 "state": "configuring", 00:12:07.274 "raid_level": "raid1", 00:12:07.274 "superblock": true, 00:12:07.274 "num_base_bdevs": 4, 00:12:07.274 "num_base_bdevs_discovered": 3, 00:12:07.274 "num_base_bdevs_operational": 4, 00:12:07.274 "base_bdevs_list": [ 00:12:07.274 { 00:12:07.274 "name": "BaseBdev1", 00:12:07.274 "uuid": "628f541c-527c-4e60-a02e-38deb89bc2e6", 00:12:07.274 "is_configured": true, 00:12:07.274 "data_offset": 2048, 00:12:07.274 "data_size": 63488 00:12:07.274 }, 00:12:07.274 { 00:12:07.274 "name": null, 00:12:07.274 "uuid": "bc9e740c-eb74-45af-bc08-7968a9b812ed", 00:12:07.274 "is_configured": false, 00:12:07.274 "data_offset": 0, 00:12:07.274 "data_size": 63488 00:12:07.274 }, 00:12:07.274 { 00:12:07.274 "name": "BaseBdev3", 00:12:07.274 "uuid": "8874caaf-2f8c-42da-95db-0af592caa4ed", 00:12:07.274 "is_configured": true, 00:12:07.274 "data_offset": 2048, 00:12:07.274 "data_size": 63488 00:12:07.274 }, 00:12:07.274 { 00:12:07.274 "name": "BaseBdev4", 00:12:07.274 "uuid": 
"0aa0fc09-68b7-4f86-88ba-e6b36d5529dd", 00:12:07.274 "is_configured": true, 00:12:07.274 "data_offset": 2048, 00:12:07.274 "data_size": 63488 00:12:07.274 } 00:12:07.274 ] 00:12:07.274 }' 00:12:07.274 05:50:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.274 05:50:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.842 [2024-12-12 05:50:15.146739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.842 "name": "Existed_Raid", 00:12:07.842 "uuid": "ec2438eb-546d-46ce-8213-3891fb5fdc6f", 00:12:07.842 "strip_size_kb": 0, 00:12:07.842 "state": "configuring", 00:12:07.842 "raid_level": "raid1", 00:12:07.842 "superblock": true, 00:12:07.842 "num_base_bdevs": 4, 00:12:07.842 "num_base_bdevs_discovered": 2, 00:12:07.842 "num_base_bdevs_operational": 4, 00:12:07.842 "base_bdevs_list": [ 00:12:07.842 { 00:12:07.842 "name": null, 00:12:07.842 
"uuid": "628f541c-527c-4e60-a02e-38deb89bc2e6", 00:12:07.842 "is_configured": false, 00:12:07.842 "data_offset": 0, 00:12:07.842 "data_size": 63488 00:12:07.842 }, 00:12:07.842 { 00:12:07.842 "name": null, 00:12:07.842 "uuid": "bc9e740c-eb74-45af-bc08-7968a9b812ed", 00:12:07.842 "is_configured": false, 00:12:07.842 "data_offset": 0, 00:12:07.842 "data_size": 63488 00:12:07.842 }, 00:12:07.842 { 00:12:07.842 "name": "BaseBdev3", 00:12:07.842 "uuid": "8874caaf-2f8c-42da-95db-0af592caa4ed", 00:12:07.842 "is_configured": true, 00:12:07.842 "data_offset": 2048, 00:12:07.842 "data_size": 63488 00:12:07.842 }, 00:12:07.842 { 00:12:07.842 "name": "BaseBdev4", 00:12:07.842 "uuid": "0aa0fc09-68b7-4f86-88ba-e6b36d5529dd", 00:12:07.842 "is_configured": true, 00:12:07.842 "data_offset": 2048, 00:12:07.842 "data_size": 63488 00:12:07.842 } 00:12:07.842 ] 00:12:07.842 }' 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.842 05:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.409 [2024-12-12 05:50:15.706032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.409 05:50:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.409 "name": "Existed_Raid", 00:12:08.409 "uuid": "ec2438eb-546d-46ce-8213-3891fb5fdc6f", 00:12:08.409 "strip_size_kb": 0, 00:12:08.409 "state": "configuring", 00:12:08.409 "raid_level": "raid1", 00:12:08.409 "superblock": true, 00:12:08.409 "num_base_bdevs": 4, 00:12:08.409 "num_base_bdevs_discovered": 3, 00:12:08.409 "num_base_bdevs_operational": 4, 00:12:08.409 "base_bdevs_list": [ 00:12:08.409 { 00:12:08.409 "name": null, 00:12:08.409 "uuid": "628f541c-527c-4e60-a02e-38deb89bc2e6", 00:12:08.409 "is_configured": false, 00:12:08.409 "data_offset": 0, 00:12:08.409 "data_size": 63488 00:12:08.409 }, 00:12:08.409 { 00:12:08.409 "name": "BaseBdev2", 00:12:08.409 "uuid": "bc9e740c-eb74-45af-bc08-7968a9b812ed", 00:12:08.409 "is_configured": true, 00:12:08.409 "data_offset": 2048, 00:12:08.409 "data_size": 63488 00:12:08.409 }, 00:12:08.409 { 00:12:08.409 "name": "BaseBdev3", 00:12:08.409 "uuid": "8874caaf-2f8c-42da-95db-0af592caa4ed", 00:12:08.409 "is_configured": true, 00:12:08.409 "data_offset": 2048, 00:12:08.409 "data_size": 63488 00:12:08.409 }, 00:12:08.409 { 00:12:08.409 "name": "BaseBdev4", 00:12:08.409 "uuid": "0aa0fc09-68b7-4f86-88ba-e6b36d5529dd", 00:12:08.409 "is_configured": true, 00:12:08.409 "data_offset": 2048, 00:12:08.409 "data_size": 63488 00:12:08.409 } 00:12:08.409 ] 00:12:08.409 }' 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.409 05:50:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.668 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:08.668 05:50:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.668 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.668 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.668 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.668 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:08.668 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:08.668 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.668 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.668 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 628f541c-527c-4e60-a02e-38deb89bc2e6 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.928 [2024-12-12 05:50:16.233186] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:08.928 [2024-12-12 05:50:16.233455] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:08.928 [2024-12-12 05:50:16.233475] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:08.928 [2024-12-12 05:50:16.233812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:08.928 NewBaseBdev 00:12:08.928 [2024-12-12 05:50:16.234025] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:08.928 [2024-12-12 05:50:16.234045] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:08.928 [2024-12-12 05:50:16.234231] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.928 05:50:16 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.928 [ 00:12:08.928 { 00:12:08.928 "name": "NewBaseBdev", 00:12:08.928 "aliases": [ 00:12:08.928 "628f541c-527c-4e60-a02e-38deb89bc2e6" 00:12:08.928 ], 00:12:08.928 "product_name": "Malloc disk", 00:12:08.928 "block_size": 512, 00:12:08.928 "num_blocks": 65536, 00:12:08.928 "uuid": "628f541c-527c-4e60-a02e-38deb89bc2e6", 00:12:08.928 "assigned_rate_limits": { 00:12:08.928 "rw_ios_per_sec": 0, 00:12:08.928 "rw_mbytes_per_sec": 0, 00:12:08.928 "r_mbytes_per_sec": 0, 00:12:08.928 "w_mbytes_per_sec": 0 00:12:08.928 }, 00:12:08.928 "claimed": true, 00:12:08.928 "claim_type": "exclusive_write", 00:12:08.928 "zoned": false, 00:12:08.928 "supported_io_types": { 00:12:08.928 "read": true, 00:12:08.928 "write": true, 00:12:08.928 "unmap": true, 00:12:08.928 "flush": true, 00:12:08.928 "reset": true, 00:12:08.928 "nvme_admin": false, 00:12:08.928 "nvme_io": false, 00:12:08.928 "nvme_io_md": false, 00:12:08.928 "write_zeroes": true, 00:12:08.928 "zcopy": true, 00:12:08.928 "get_zone_info": false, 00:12:08.928 "zone_management": false, 00:12:08.928 "zone_append": false, 00:12:08.928 "compare": false, 00:12:08.928 "compare_and_write": false, 00:12:08.928 "abort": true, 00:12:08.928 "seek_hole": false, 00:12:08.928 "seek_data": false, 00:12:08.928 "copy": true, 00:12:08.928 "nvme_iov_md": false 00:12:08.928 }, 00:12:08.928 "memory_domains": [ 00:12:08.928 { 00:12:08.928 "dma_device_id": "system", 00:12:08.928 "dma_device_type": 1 00:12:08.928 }, 00:12:08.928 { 00:12:08.928 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:08.928 "dma_device_type": 2 00:12:08.928 } 00:12:08.928 ], 00:12:08.928 "driver_specific": {} 00:12:08.928 } 00:12:08.928 ] 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:08.928 05:50:16 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.928 "name": "Existed_Raid", 00:12:08.928 "uuid": "ec2438eb-546d-46ce-8213-3891fb5fdc6f", 00:12:08.928 "strip_size_kb": 0, 00:12:08.928 
"state": "online", 00:12:08.928 "raid_level": "raid1", 00:12:08.928 "superblock": true, 00:12:08.928 "num_base_bdevs": 4, 00:12:08.928 "num_base_bdevs_discovered": 4, 00:12:08.928 "num_base_bdevs_operational": 4, 00:12:08.928 "base_bdevs_list": [ 00:12:08.928 { 00:12:08.928 "name": "NewBaseBdev", 00:12:08.928 "uuid": "628f541c-527c-4e60-a02e-38deb89bc2e6", 00:12:08.928 "is_configured": true, 00:12:08.928 "data_offset": 2048, 00:12:08.928 "data_size": 63488 00:12:08.928 }, 00:12:08.928 { 00:12:08.928 "name": "BaseBdev2", 00:12:08.928 "uuid": "bc9e740c-eb74-45af-bc08-7968a9b812ed", 00:12:08.928 "is_configured": true, 00:12:08.928 "data_offset": 2048, 00:12:08.928 "data_size": 63488 00:12:08.928 }, 00:12:08.928 { 00:12:08.928 "name": "BaseBdev3", 00:12:08.928 "uuid": "8874caaf-2f8c-42da-95db-0af592caa4ed", 00:12:08.928 "is_configured": true, 00:12:08.928 "data_offset": 2048, 00:12:08.928 "data_size": 63488 00:12:08.928 }, 00:12:08.928 { 00:12:08.928 "name": "BaseBdev4", 00:12:08.928 "uuid": "0aa0fc09-68b7-4f86-88ba-e6b36d5529dd", 00:12:08.928 "is_configured": true, 00:12:08.928 "data_offset": 2048, 00:12:08.928 "data_size": 63488 00:12:08.928 } 00:12:08.928 ] 00:12:08.928 }' 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.928 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.187 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:09.187 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:09.187 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:09.187 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:09.187 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:09.187 
05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:09.187 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:09.187 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:09.187 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.187 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.187 [2024-12-12 05:50:16.688852] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:09.446 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.446 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:09.446 "name": "Existed_Raid", 00:12:09.446 "aliases": [ 00:12:09.446 "ec2438eb-546d-46ce-8213-3891fb5fdc6f" 00:12:09.446 ], 00:12:09.446 "product_name": "Raid Volume", 00:12:09.446 "block_size": 512, 00:12:09.446 "num_blocks": 63488, 00:12:09.446 "uuid": "ec2438eb-546d-46ce-8213-3891fb5fdc6f", 00:12:09.446 "assigned_rate_limits": { 00:12:09.446 "rw_ios_per_sec": 0, 00:12:09.446 "rw_mbytes_per_sec": 0, 00:12:09.446 "r_mbytes_per_sec": 0, 00:12:09.446 "w_mbytes_per_sec": 0 00:12:09.446 }, 00:12:09.446 "claimed": false, 00:12:09.446 "zoned": false, 00:12:09.446 "supported_io_types": { 00:12:09.446 "read": true, 00:12:09.446 "write": true, 00:12:09.446 "unmap": false, 00:12:09.446 "flush": false, 00:12:09.446 "reset": true, 00:12:09.446 "nvme_admin": false, 00:12:09.446 "nvme_io": false, 00:12:09.446 "nvme_io_md": false, 00:12:09.446 "write_zeroes": true, 00:12:09.446 "zcopy": false, 00:12:09.446 "get_zone_info": false, 00:12:09.446 "zone_management": false, 00:12:09.446 "zone_append": false, 00:12:09.446 "compare": false, 00:12:09.446 "compare_and_write": false, 00:12:09.446 
"abort": false, 00:12:09.446 "seek_hole": false, 00:12:09.446 "seek_data": false, 00:12:09.446 "copy": false, 00:12:09.446 "nvme_iov_md": false 00:12:09.446 }, 00:12:09.446 "memory_domains": [ 00:12:09.446 { 00:12:09.446 "dma_device_id": "system", 00:12:09.446 "dma_device_type": 1 00:12:09.446 }, 00:12:09.446 { 00:12:09.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.446 "dma_device_type": 2 00:12:09.446 }, 00:12:09.446 { 00:12:09.446 "dma_device_id": "system", 00:12:09.446 "dma_device_type": 1 00:12:09.446 }, 00:12:09.446 { 00:12:09.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.446 "dma_device_type": 2 00:12:09.446 }, 00:12:09.446 { 00:12:09.446 "dma_device_id": "system", 00:12:09.446 "dma_device_type": 1 00:12:09.446 }, 00:12:09.446 { 00:12:09.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.446 "dma_device_type": 2 00:12:09.446 }, 00:12:09.446 { 00:12:09.446 "dma_device_id": "system", 00:12:09.446 "dma_device_type": 1 00:12:09.446 }, 00:12:09.446 { 00:12:09.446 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.446 "dma_device_type": 2 00:12:09.446 } 00:12:09.446 ], 00:12:09.446 "driver_specific": { 00:12:09.446 "raid": { 00:12:09.446 "uuid": "ec2438eb-546d-46ce-8213-3891fb5fdc6f", 00:12:09.446 "strip_size_kb": 0, 00:12:09.446 "state": "online", 00:12:09.446 "raid_level": "raid1", 00:12:09.446 "superblock": true, 00:12:09.446 "num_base_bdevs": 4, 00:12:09.446 "num_base_bdevs_discovered": 4, 00:12:09.446 "num_base_bdevs_operational": 4, 00:12:09.446 "base_bdevs_list": [ 00:12:09.446 { 00:12:09.446 "name": "NewBaseBdev", 00:12:09.446 "uuid": "628f541c-527c-4e60-a02e-38deb89bc2e6", 00:12:09.446 "is_configured": true, 00:12:09.446 "data_offset": 2048, 00:12:09.446 "data_size": 63488 00:12:09.446 }, 00:12:09.446 { 00:12:09.446 "name": "BaseBdev2", 00:12:09.446 "uuid": "bc9e740c-eb74-45af-bc08-7968a9b812ed", 00:12:09.446 "is_configured": true, 00:12:09.446 "data_offset": 2048, 00:12:09.446 "data_size": 63488 00:12:09.447 }, 00:12:09.447 { 
00:12:09.447 "name": "BaseBdev3", 00:12:09.447 "uuid": "8874caaf-2f8c-42da-95db-0af592caa4ed", 00:12:09.447 "is_configured": true, 00:12:09.447 "data_offset": 2048, 00:12:09.447 "data_size": 63488 00:12:09.447 }, 00:12:09.447 { 00:12:09.447 "name": "BaseBdev4", 00:12:09.447 "uuid": "0aa0fc09-68b7-4f86-88ba-e6b36d5529dd", 00:12:09.447 "is_configured": true, 00:12:09.447 "data_offset": 2048, 00:12:09.447 "data_size": 63488 00:12:09.447 } 00:12:09.447 ] 00:12:09.447 } 00:12:09.447 } 00:12:09.447 }' 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:09.447 BaseBdev2 00:12:09.447 BaseBdev3 00:12:09.447 BaseBdev4' 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.447 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.706 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.706 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.706 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:09.706 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.706 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.706 [2024-12-12 05:50:16.971938] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:09.706 [2024-12-12 05:50:16.972014] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:09.706 [2024-12-12 05:50:16.972136] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:09.706 [2024-12-12 05:50:16.972467] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:09.706 [2024-12-12 05:50:16.972563] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:09.706 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.706 05:50:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74757 00:12:09.706 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 74757 ']' 00:12:09.706 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 74757 00:12:09.706 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:09.706 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:09.706 05:50:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74757 00:12:09.706 05:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:09.706 05:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:09.706 05:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74757' 00:12:09.706 killing process with pid 74757 00:12:09.706 05:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 74757 00:12:09.706 [2024-12-12 05:50:17.011662] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:09.706 05:50:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 74757 00:12:09.965 [2024-12-12 05:50:17.400290] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:11.369 05:50:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:11.369 00:12:11.369 real 0m11.292s 00:12:11.369 user 0m17.800s 00:12:11.369 sys 0m2.119s 00:12:11.369 05:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:12:11.369 ************************************ 00:12:11.369 END TEST raid_state_function_test_sb 00:12:11.369 ************************************ 00:12:11.369 05:50:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.369 05:50:18 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:11.369 05:50:18 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:11.369 05:50:18 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:11.369 05:50:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:11.369 ************************************ 00:12:11.369 START TEST raid_superblock_test 00:12:11.369 ************************************ 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:11.369 05:50:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=75422 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 75422 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 75422 ']' 00:12:11.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:11.369 05:50:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.369 [2024-12-12 05:50:18.676727] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:12:11.369 [2024-12-12 05:50:18.676934] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75422 ] 00:12:11.369 [2024-12-12 05:50:18.849223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.628 [2024-12-12 05:50:18.962531] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:11.628 [2024-12-12 05:50:19.137893] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:11.628 [2024-12-12 05:50:19.137935] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.195 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:12.195 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:12.195 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:12.195 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:12.195 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:12.195 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:12.196 
05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.196 malloc1 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.196 [2024-12-12 05:50:19.547801] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:12.196 [2024-12-12 05:50:19.547925] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.196 [2024-12-12 05:50:19.548009] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:12.196 [2024-12-12 05:50:19.548055] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.196 [2024-12-12 05:50:19.550227] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.196 [2024-12-12 05:50:19.550307] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:12.196 pt1 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.196 malloc2 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.196 [2024-12-12 05:50:19.605374] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:12.196 [2024-12-12 05:50:19.605489] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.196 [2024-12-12 05:50:19.605552] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:12.196 [2024-12-12 05:50:19.605594] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.196 [2024-12-12 05:50:19.607762] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.196 [2024-12-12 05:50:19.607841] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:12.196 
pt2 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.196 malloc3 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.196 [2024-12-12 05:50:19.673606] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:12.196 [2024-12-12 05:50:19.673714] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.196 [2024-12-12 05:50:19.673758] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:12.196 [2024-12-12 05:50:19.673794] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.196 [2024-12-12 05:50:19.675948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.196 [2024-12-12 05:50:19.676045] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:12.196 pt3 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.196 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.455 malloc4 00:12:12.455 05:50:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.455 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:12.455 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.455 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.455 [2024-12-12 05:50:19.732857] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:12.455 [2024-12-12 05:50:19.732973] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.455 [2024-12-12 05:50:19.733018] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:12.455 [2024-12-12 05:50:19.733080] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.455 [2024-12-12 05:50:19.735197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.455 [2024-12-12 05:50:19.735279] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:12.455 pt4 00:12:12.455 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.455 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:12.455 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:12.455 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:12.455 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.455 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.455 [2024-12-12 05:50:19.744870] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:12.455 [2024-12-12 05:50:19.746765] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:12.455 [2024-12-12 05:50:19.746881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:12.455 [2024-12-12 05:50:19.746990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:12.456 [2024-12-12 05:50:19.747268] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:12.456 [2024-12-12 05:50:19.747328] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:12.456 [2024-12-12 05:50:19.747648] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:12.456 [2024-12-12 05:50:19.747889] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:12.456 [2024-12-12 05:50:19.747948] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:12.456 [2024-12-12 05:50:19.748207] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.456 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.456 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:12.456 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.456 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.456 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:12.456 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:12.456 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.456 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.456 
05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.456 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.456 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.456 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.456 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.456 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.456 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.456 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.456 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.456 "name": "raid_bdev1", 00:12:12.456 "uuid": "9d78334b-8645-4818-9edd-65d1749ec91a", 00:12:12.456 "strip_size_kb": 0, 00:12:12.456 "state": "online", 00:12:12.456 "raid_level": "raid1", 00:12:12.456 "superblock": true, 00:12:12.456 "num_base_bdevs": 4, 00:12:12.456 "num_base_bdevs_discovered": 4, 00:12:12.456 "num_base_bdevs_operational": 4, 00:12:12.456 "base_bdevs_list": [ 00:12:12.456 { 00:12:12.456 "name": "pt1", 00:12:12.456 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:12.456 "is_configured": true, 00:12:12.456 "data_offset": 2048, 00:12:12.456 "data_size": 63488 00:12:12.456 }, 00:12:12.456 { 00:12:12.456 "name": "pt2", 00:12:12.456 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:12.456 "is_configured": true, 00:12:12.456 "data_offset": 2048, 00:12:12.456 "data_size": 63488 00:12:12.456 }, 00:12:12.456 { 00:12:12.456 "name": "pt3", 00:12:12.456 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:12.456 "is_configured": true, 00:12:12.456 "data_offset": 2048, 00:12:12.456 "data_size": 63488 
00:12:12.456 }, 00:12:12.456 { 00:12:12.456 "name": "pt4", 00:12:12.456 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:12.456 "is_configured": true, 00:12:12.456 "data_offset": 2048, 00:12:12.456 "data_size": 63488 00:12:12.456 } 00:12:12.456 ] 00:12:12.456 }' 00:12:12.456 05:50:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.456 05:50:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.023 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:13.023 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:13.023 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:13.023 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:13.023 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:13.023 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:13.023 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:13.023 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.023 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:13.023 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.023 [2024-12-12 05:50:20.244349] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:13.023 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.023 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:13.023 "name": "raid_bdev1", 00:12:13.023 "aliases": [ 00:12:13.023 "9d78334b-8645-4818-9edd-65d1749ec91a" 00:12:13.023 ], 
00:12:13.023 "product_name": "Raid Volume", 00:12:13.023 "block_size": 512, 00:12:13.023 "num_blocks": 63488, 00:12:13.023 "uuid": "9d78334b-8645-4818-9edd-65d1749ec91a", 00:12:13.023 "assigned_rate_limits": { 00:12:13.023 "rw_ios_per_sec": 0, 00:12:13.023 "rw_mbytes_per_sec": 0, 00:12:13.023 "r_mbytes_per_sec": 0, 00:12:13.023 "w_mbytes_per_sec": 0 00:12:13.023 }, 00:12:13.023 "claimed": false, 00:12:13.023 "zoned": false, 00:12:13.023 "supported_io_types": { 00:12:13.023 "read": true, 00:12:13.023 "write": true, 00:12:13.023 "unmap": false, 00:12:13.023 "flush": false, 00:12:13.023 "reset": true, 00:12:13.023 "nvme_admin": false, 00:12:13.023 "nvme_io": false, 00:12:13.023 "nvme_io_md": false, 00:12:13.023 "write_zeroes": true, 00:12:13.023 "zcopy": false, 00:12:13.023 "get_zone_info": false, 00:12:13.023 "zone_management": false, 00:12:13.023 "zone_append": false, 00:12:13.023 "compare": false, 00:12:13.023 "compare_and_write": false, 00:12:13.023 "abort": false, 00:12:13.023 "seek_hole": false, 00:12:13.023 "seek_data": false, 00:12:13.023 "copy": false, 00:12:13.023 "nvme_iov_md": false 00:12:13.023 }, 00:12:13.023 "memory_domains": [ 00:12:13.023 { 00:12:13.023 "dma_device_id": "system", 00:12:13.023 "dma_device_type": 1 00:12:13.023 }, 00:12:13.023 { 00:12:13.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.023 "dma_device_type": 2 00:12:13.023 }, 00:12:13.023 { 00:12:13.023 "dma_device_id": "system", 00:12:13.023 "dma_device_type": 1 00:12:13.023 }, 00:12:13.023 { 00:12:13.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.023 "dma_device_type": 2 00:12:13.023 }, 00:12:13.023 { 00:12:13.023 "dma_device_id": "system", 00:12:13.023 "dma_device_type": 1 00:12:13.023 }, 00:12:13.023 { 00:12:13.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:13.024 "dma_device_type": 2 00:12:13.024 }, 00:12:13.024 { 00:12:13.024 "dma_device_id": "system", 00:12:13.024 "dma_device_type": 1 00:12:13.024 }, 00:12:13.024 { 00:12:13.024 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:13.024 "dma_device_type": 2 00:12:13.024 } 00:12:13.024 ], 00:12:13.024 "driver_specific": { 00:12:13.024 "raid": { 00:12:13.024 "uuid": "9d78334b-8645-4818-9edd-65d1749ec91a", 00:12:13.024 "strip_size_kb": 0, 00:12:13.024 "state": "online", 00:12:13.024 "raid_level": "raid1", 00:12:13.024 "superblock": true, 00:12:13.024 "num_base_bdevs": 4, 00:12:13.024 "num_base_bdevs_discovered": 4, 00:12:13.024 "num_base_bdevs_operational": 4, 00:12:13.024 "base_bdevs_list": [ 00:12:13.024 { 00:12:13.024 "name": "pt1", 00:12:13.024 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:13.024 "is_configured": true, 00:12:13.024 "data_offset": 2048, 00:12:13.024 "data_size": 63488 00:12:13.024 }, 00:12:13.024 { 00:12:13.024 "name": "pt2", 00:12:13.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:13.024 "is_configured": true, 00:12:13.024 "data_offset": 2048, 00:12:13.024 "data_size": 63488 00:12:13.024 }, 00:12:13.024 { 00:12:13.024 "name": "pt3", 00:12:13.024 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:13.024 "is_configured": true, 00:12:13.024 "data_offset": 2048, 00:12:13.024 "data_size": 63488 00:12:13.024 }, 00:12:13.024 { 00:12:13.024 "name": "pt4", 00:12:13.024 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:13.024 "is_configured": true, 00:12:13.024 "data_offset": 2048, 00:12:13.024 "data_size": 63488 00:12:13.024 } 00:12:13.024 ] 00:12:13.024 } 00:12:13.024 } 00:12:13.024 }' 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:13.024 pt2 00:12:13.024 pt3 00:12:13.024 pt4' 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:13.024 05:50:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:13.024 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.284 [2024-12-12 05:50:20.551791] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9d78334b-8645-4818-9edd-65d1749ec91a 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9d78334b-8645-4818-9edd-65d1749ec91a ']' 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.284 [2024-12-12 05:50:20.583444] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:13.284 [2024-12-12 05:50:20.583532] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:13.284 [2024-12-12 05:50:20.583658] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:13.284 [2024-12-12 05:50:20.583800] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:13.284 [2024-12-12 05:50:20.583863] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.284 05:50:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.284 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.284 [2024-12-12 05:50:20.743182] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:13.284 [2024-12-12 05:50:20.745077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:13.284 [2024-12-12 05:50:20.745178] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:13.284 [2024-12-12 05:50:20.745258] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:13.284 [2024-12-12 05:50:20.745370] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:13.284 [2024-12-12 05:50:20.745491] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:13.284 [2024-12-12 05:50:20.745585] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:13.284 [2024-12-12 05:50:20.745670] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:13.285 [2024-12-12 05:50:20.745746] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:13.285 [2024-12-12 05:50:20.745790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:12:13.285 request: 00:12:13.285 { 00:12:13.285 "name": "raid_bdev1", 00:12:13.285 "raid_level": "raid1", 00:12:13.285 "base_bdevs": [ 00:12:13.285 "malloc1", 00:12:13.285 "malloc2", 00:12:13.285 "malloc3", 00:12:13.285 "malloc4" 00:12:13.285 ], 00:12:13.285 "superblock": false, 00:12:13.285 "method": "bdev_raid_create", 00:12:13.285 "req_id": 1 00:12:13.285 } 00:12:13.285 Got JSON-RPC error response 00:12:13.285 response: 00:12:13.285 { 00:12:13.285 "code": -17, 00:12:13.285 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:13.285 } 00:12:13.285 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:13.285 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:13.285 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:13.285 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:13.285 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:13.285 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.285 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:13.285 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.285 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.285 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.285 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:13.285 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:13.285 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:13.285 
05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.285 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.545 [2024-12-12 05:50:20.811079] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:13.545 [2024-12-12 05:50:20.811225] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.545 [2024-12-12 05:50:20.811271] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:13.545 [2024-12-12 05:50:20.811326] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.545 [2024-12-12 05:50:20.813651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.545 [2024-12-12 05:50:20.813741] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:13.545 [2024-12-12 05:50:20.813861] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:13.545 [2024-12-12 05:50:20.813967] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:13.545 pt1 00:12:13.545 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.545 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:13.545 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:13.545 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:13.545 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:13.545 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:13.545 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:13.545 05:50:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.545 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.545 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.545 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.545 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.545 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.545 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.545 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:13.545 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.545 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.545 "name": "raid_bdev1", 00:12:13.545 "uuid": "9d78334b-8645-4818-9edd-65d1749ec91a", 00:12:13.545 "strip_size_kb": 0, 00:12:13.545 "state": "configuring", 00:12:13.545 "raid_level": "raid1", 00:12:13.545 "superblock": true, 00:12:13.545 "num_base_bdevs": 4, 00:12:13.545 "num_base_bdevs_discovered": 1, 00:12:13.545 "num_base_bdevs_operational": 4, 00:12:13.545 "base_bdevs_list": [ 00:12:13.545 { 00:12:13.545 "name": "pt1", 00:12:13.545 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:13.545 "is_configured": true, 00:12:13.545 "data_offset": 2048, 00:12:13.545 "data_size": 63488 00:12:13.545 }, 00:12:13.545 { 00:12:13.545 "name": null, 00:12:13.545 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:13.545 "is_configured": false, 00:12:13.545 "data_offset": 2048, 00:12:13.545 "data_size": 63488 00:12:13.545 }, 00:12:13.545 { 00:12:13.545 "name": null, 00:12:13.545 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:13.545 
"is_configured": false, 00:12:13.545 "data_offset": 2048, 00:12:13.545 "data_size": 63488 00:12:13.545 }, 00:12:13.545 { 00:12:13.545 "name": null, 00:12:13.545 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:13.545 "is_configured": false, 00:12:13.545 "data_offset": 2048, 00:12:13.545 "data_size": 63488 00:12:13.545 } 00:12:13.545 ] 00:12:13.545 }' 00:12:13.545 05:50:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.545 05:50:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.806 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:13.806 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:13.806 05:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.806 05:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.806 [2024-12-12 05:50:21.310399] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:13.806 [2024-12-12 05:50:21.310571] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.806 [2024-12-12 05:50:21.310632] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:13.806 [2024-12-12 05:50:21.310681] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.806 [2024-12-12 05:50:21.311212] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.806 [2024-12-12 05:50:21.311285] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:13.806 [2024-12-12 05:50:21.311438] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:13.806 [2024-12-12 05:50:21.311524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:12:13.806 pt2 00:12:13.806 05:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.806 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:13.806 05:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.806 05:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.806 [2024-12-12 05:50:21.322371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:14.065 05:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.065 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:12:14.065 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.065 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.065 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.065 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.065 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.065 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.065 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.065 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.065 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.065 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.065 05:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.065 05:50:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.065 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.065 05:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.065 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.065 "name": "raid_bdev1", 00:12:14.065 "uuid": "9d78334b-8645-4818-9edd-65d1749ec91a", 00:12:14.065 "strip_size_kb": 0, 00:12:14.065 "state": "configuring", 00:12:14.065 "raid_level": "raid1", 00:12:14.065 "superblock": true, 00:12:14.065 "num_base_bdevs": 4, 00:12:14.065 "num_base_bdevs_discovered": 1, 00:12:14.065 "num_base_bdevs_operational": 4, 00:12:14.065 "base_bdevs_list": [ 00:12:14.065 { 00:12:14.065 "name": "pt1", 00:12:14.065 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:14.065 "is_configured": true, 00:12:14.065 "data_offset": 2048, 00:12:14.065 "data_size": 63488 00:12:14.065 }, 00:12:14.065 { 00:12:14.065 "name": null, 00:12:14.065 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:14.065 "is_configured": false, 00:12:14.065 "data_offset": 0, 00:12:14.065 "data_size": 63488 00:12:14.065 }, 00:12:14.065 { 00:12:14.065 "name": null, 00:12:14.065 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:14.065 "is_configured": false, 00:12:14.065 "data_offset": 2048, 00:12:14.065 "data_size": 63488 00:12:14.065 }, 00:12:14.065 { 00:12:14.065 "name": null, 00:12:14.065 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:14.065 "is_configured": false, 00:12:14.065 "data_offset": 2048, 00:12:14.065 "data_size": 63488 00:12:14.065 } 00:12:14.065 ] 00:12:14.065 }' 00:12:14.065 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.065 05:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.325 [2024-12-12 05:50:21.777590] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:14.325 [2024-12-12 05:50:21.777706] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.325 [2024-12-12 05:50:21.777749] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:14.325 [2024-12-12 05:50:21.777782] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.325 [2024-12-12 05:50:21.778293] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.325 [2024-12-12 05:50:21.778386] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:14.325 [2024-12-12 05:50:21.778544] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:14.325 [2024-12-12 05:50:21.778607] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:14.325 pt2 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:14.325 05:50:21 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.325 [2024-12-12 05:50:21.789551] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:14.325 [2024-12-12 05:50:21.789643] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.325 [2024-12-12 05:50:21.789683] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:14.325 [2024-12-12 05:50:21.789714] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.325 [2024-12-12 05:50:21.790131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.325 [2024-12-12 05:50:21.790194] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:14.325 [2024-12-12 05:50:21.790302] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:14.325 [2024-12-12 05:50:21.790392] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:14.325 pt3 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.325 [2024-12-12 05:50:21.801487] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:14.325 [2024-12-12 
05:50:21.801600] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.325 [2024-12-12 05:50:21.801623] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:14.325 [2024-12-12 05:50:21.801633] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.325 [2024-12-12 05:50:21.802029] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.325 [2024-12-12 05:50:21.802047] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:14.325 [2024-12-12 05:50:21.802109] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:14.325 [2024-12-12 05:50:21.802136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:14.325 [2024-12-12 05:50:21.802279] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:14.325 [2024-12-12 05:50:21.802288] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:14.325 [2024-12-12 05:50:21.802574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:14.325 [2024-12-12 05:50:21.802745] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:14.325 [2024-12-12 05:50:21.802826] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:14.325 [2024-12-12 05:50:21.802987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.325 pt4 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.325 05:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.584 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.584 "name": "raid_bdev1", 00:12:14.584 "uuid": "9d78334b-8645-4818-9edd-65d1749ec91a", 00:12:14.584 "strip_size_kb": 0, 00:12:14.584 "state": "online", 00:12:14.584 "raid_level": "raid1", 00:12:14.584 "superblock": true, 00:12:14.584 "num_base_bdevs": 4, 00:12:14.584 
"num_base_bdevs_discovered": 4, 00:12:14.584 "num_base_bdevs_operational": 4, 00:12:14.584 "base_bdevs_list": [ 00:12:14.584 { 00:12:14.584 "name": "pt1", 00:12:14.584 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:14.584 "is_configured": true, 00:12:14.584 "data_offset": 2048, 00:12:14.584 "data_size": 63488 00:12:14.584 }, 00:12:14.584 { 00:12:14.584 "name": "pt2", 00:12:14.584 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:14.584 "is_configured": true, 00:12:14.584 "data_offset": 2048, 00:12:14.584 "data_size": 63488 00:12:14.584 }, 00:12:14.584 { 00:12:14.584 "name": "pt3", 00:12:14.584 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:14.584 "is_configured": true, 00:12:14.584 "data_offset": 2048, 00:12:14.584 "data_size": 63488 00:12:14.584 }, 00:12:14.584 { 00:12:14.584 "name": "pt4", 00:12:14.584 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:14.584 "is_configured": true, 00:12:14.584 "data_offset": 2048, 00:12:14.584 "data_size": 63488 00:12:14.584 } 00:12:14.584 ] 00:12:14.584 }' 00:12:14.584 05:50:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.584 05:50:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.843 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:14.843 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:14.843 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:14.843 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:14.843 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:14.843 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:14.843 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:12:14.843 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:14.843 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.843 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.843 [2024-12-12 05:50:22.253119] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:14.843 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.843 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:14.843 "name": "raid_bdev1", 00:12:14.843 "aliases": [ 00:12:14.843 "9d78334b-8645-4818-9edd-65d1749ec91a" 00:12:14.843 ], 00:12:14.843 "product_name": "Raid Volume", 00:12:14.843 "block_size": 512, 00:12:14.843 "num_blocks": 63488, 00:12:14.843 "uuid": "9d78334b-8645-4818-9edd-65d1749ec91a", 00:12:14.843 "assigned_rate_limits": { 00:12:14.843 "rw_ios_per_sec": 0, 00:12:14.843 "rw_mbytes_per_sec": 0, 00:12:14.843 "r_mbytes_per_sec": 0, 00:12:14.843 "w_mbytes_per_sec": 0 00:12:14.843 }, 00:12:14.843 "claimed": false, 00:12:14.843 "zoned": false, 00:12:14.843 "supported_io_types": { 00:12:14.843 "read": true, 00:12:14.843 "write": true, 00:12:14.843 "unmap": false, 00:12:14.843 "flush": false, 00:12:14.843 "reset": true, 00:12:14.843 "nvme_admin": false, 00:12:14.843 "nvme_io": false, 00:12:14.843 "nvme_io_md": false, 00:12:14.843 "write_zeroes": true, 00:12:14.843 "zcopy": false, 00:12:14.843 "get_zone_info": false, 00:12:14.843 "zone_management": false, 00:12:14.843 "zone_append": false, 00:12:14.843 "compare": false, 00:12:14.843 "compare_and_write": false, 00:12:14.843 "abort": false, 00:12:14.843 "seek_hole": false, 00:12:14.843 "seek_data": false, 00:12:14.843 "copy": false, 00:12:14.843 "nvme_iov_md": false 00:12:14.843 }, 00:12:14.843 "memory_domains": [ 00:12:14.843 { 00:12:14.843 "dma_device_id": "system", 00:12:14.843 
"dma_device_type": 1 00:12:14.843 }, 00:12:14.843 { 00:12:14.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.843 "dma_device_type": 2 00:12:14.843 }, 00:12:14.843 { 00:12:14.843 "dma_device_id": "system", 00:12:14.843 "dma_device_type": 1 00:12:14.843 }, 00:12:14.843 { 00:12:14.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.843 "dma_device_type": 2 00:12:14.843 }, 00:12:14.843 { 00:12:14.843 "dma_device_id": "system", 00:12:14.843 "dma_device_type": 1 00:12:14.843 }, 00:12:14.843 { 00:12:14.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.843 "dma_device_type": 2 00:12:14.843 }, 00:12:14.843 { 00:12:14.843 "dma_device_id": "system", 00:12:14.843 "dma_device_type": 1 00:12:14.843 }, 00:12:14.843 { 00:12:14.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.843 "dma_device_type": 2 00:12:14.843 } 00:12:14.843 ], 00:12:14.843 "driver_specific": { 00:12:14.843 "raid": { 00:12:14.843 "uuid": "9d78334b-8645-4818-9edd-65d1749ec91a", 00:12:14.843 "strip_size_kb": 0, 00:12:14.843 "state": "online", 00:12:14.843 "raid_level": "raid1", 00:12:14.843 "superblock": true, 00:12:14.843 "num_base_bdevs": 4, 00:12:14.843 "num_base_bdevs_discovered": 4, 00:12:14.843 "num_base_bdevs_operational": 4, 00:12:14.843 "base_bdevs_list": [ 00:12:14.843 { 00:12:14.843 "name": "pt1", 00:12:14.843 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:14.843 "is_configured": true, 00:12:14.843 "data_offset": 2048, 00:12:14.843 "data_size": 63488 00:12:14.843 }, 00:12:14.843 { 00:12:14.843 "name": "pt2", 00:12:14.843 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:14.843 "is_configured": true, 00:12:14.843 "data_offset": 2048, 00:12:14.843 "data_size": 63488 00:12:14.843 }, 00:12:14.843 { 00:12:14.843 "name": "pt3", 00:12:14.843 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:14.843 "is_configured": true, 00:12:14.843 "data_offset": 2048, 00:12:14.843 "data_size": 63488 00:12:14.843 }, 00:12:14.843 { 00:12:14.843 "name": "pt4", 00:12:14.843 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:12:14.843 "is_configured": true, 00:12:14.843 "data_offset": 2048, 00:12:14.843 "data_size": 63488 00:12:14.843 } 00:12:14.843 ] 00:12:14.843 } 00:12:14.843 } 00:12:14.843 }' 00:12:14.843 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:14.843 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:14.843 pt2 00:12:14.843 pt3 00:12:14.843 pt4' 00:12:14.843 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.102 [2024-12-12 05:50:22.516661] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.102 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9d78334b-8645-4818-9edd-65d1749ec91a '!=' 9d78334b-8645-4818-9edd-65d1749ec91a ']' 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.103 [2024-12-12 05:50:22.564336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:15.103 05:50:22 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.103 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.103 "name": "raid_bdev1", 00:12:15.103 "uuid": "9d78334b-8645-4818-9edd-65d1749ec91a", 00:12:15.103 "strip_size_kb": 0, 00:12:15.103 "state": "online", 
00:12:15.103 "raid_level": "raid1", 00:12:15.103 "superblock": true, 00:12:15.103 "num_base_bdevs": 4, 00:12:15.103 "num_base_bdevs_discovered": 3, 00:12:15.103 "num_base_bdevs_operational": 3, 00:12:15.103 "base_bdevs_list": [ 00:12:15.103 { 00:12:15.103 "name": null, 00:12:15.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.103 "is_configured": false, 00:12:15.103 "data_offset": 0, 00:12:15.103 "data_size": 63488 00:12:15.103 }, 00:12:15.103 { 00:12:15.103 "name": "pt2", 00:12:15.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:15.103 "is_configured": true, 00:12:15.103 "data_offset": 2048, 00:12:15.103 "data_size": 63488 00:12:15.103 }, 00:12:15.103 { 00:12:15.103 "name": "pt3", 00:12:15.103 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:15.103 "is_configured": true, 00:12:15.103 "data_offset": 2048, 00:12:15.103 "data_size": 63488 00:12:15.103 }, 00:12:15.103 { 00:12:15.103 "name": "pt4", 00:12:15.103 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:15.103 "is_configured": true, 00:12:15.103 "data_offset": 2048, 00:12:15.103 "data_size": 63488 00:12:15.103 } 00:12:15.103 ] 00:12:15.103 }' 00:12:15.361 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.362 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.621 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:15.621 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.621 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.621 [2024-12-12 05:50:22.987615] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:15.621 [2024-12-12 05:50:22.987708] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:15.621 [2024-12-12 05:50:22.987833] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:15.621 [2024-12-12 05:50:22.987955] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:15.621 [2024-12-12 05:50:22.988023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:15.621 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.621 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.621 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.621 05:50:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.621 05:50:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:15.621 
05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.621 [2024-12-12 05:50:23.071446] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:15.621 [2024-12-12 05:50:23.071560] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:15.621 [2024-12-12 05:50:23.071602] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:15.621 [2024-12-12 05:50:23.071656] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:15.621 [2024-12-12 05:50:23.073861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:15.621 [2024-12-12 05:50:23.073941] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:15.621 [2024-12-12 05:50:23.074054] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:15.621 [2024-12-12 05:50:23.074143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:15.621 pt2 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.621 "name": "raid_bdev1", 00:12:15.621 "uuid": "9d78334b-8645-4818-9edd-65d1749ec91a", 00:12:15.621 "strip_size_kb": 0, 00:12:15.621 "state": "configuring", 00:12:15.621 "raid_level": "raid1", 00:12:15.621 "superblock": true, 00:12:15.621 "num_base_bdevs": 4, 00:12:15.621 "num_base_bdevs_discovered": 1, 00:12:15.621 "num_base_bdevs_operational": 3, 00:12:15.621 "base_bdevs_list": [ 00:12:15.621 { 00:12:15.621 "name": null, 00:12:15.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:15.621 "is_configured": false, 00:12:15.621 "data_offset": 2048, 00:12:15.621 "data_size": 63488 00:12:15.621 }, 00:12:15.621 { 00:12:15.621 "name": "pt2", 00:12:15.621 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:15.621 "is_configured": true, 00:12:15.621 "data_offset": 2048, 00:12:15.621 "data_size": 63488 00:12:15.621 }, 00:12:15.621 { 00:12:15.621 "name": null, 00:12:15.621 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:15.621 "is_configured": false, 00:12:15.621 "data_offset": 2048, 00:12:15.621 "data_size": 63488 00:12:15.621 }, 00:12:15.621 { 00:12:15.621 "name": null, 00:12:15.621 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:15.621 "is_configured": false, 00:12:15.621 "data_offset": 2048, 00:12:15.621 "data_size": 63488 00:12:15.621 } 00:12:15.621 ] 00:12:15.621 }' 
00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.621 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.188 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:16.188 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:16.188 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:16.188 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.188 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.188 [2024-12-12 05:50:23.506753] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:16.188 [2024-12-12 05:50:23.506883] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.188 [2024-12-12 05:50:23.506930] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:16.188 [2024-12-12 05:50:23.506978] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.188 [2024-12-12 05:50:23.507536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.189 [2024-12-12 05:50:23.507608] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:16.189 [2024-12-12 05:50:23.507768] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:16.189 [2024-12-12 05:50:23.507830] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:16.189 pt3 00:12:16.189 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.189 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:16.189 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.189 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.189 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.189 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.189 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.189 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.189 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.189 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.189 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.189 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.189 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.189 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.189 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.189 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.189 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.189 "name": "raid_bdev1", 00:12:16.189 "uuid": "9d78334b-8645-4818-9edd-65d1749ec91a", 00:12:16.189 "strip_size_kb": 0, 00:12:16.189 "state": "configuring", 00:12:16.189 "raid_level": "raid1", 00:12:16.189 "superblock": true, 00:12:16.189 "num_base_bdevs": 4, 00:12:16.189 "num_base_bdevs_discovered": 2, 00:12:16.189 "num_base_bdevs_operational": 3, 00:12:16.189 
"base_bdevs_list": [ 00:12:16.189 { 00:12:16.189 "name": null, 00:12:16.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.189 "is_configured": false, 00:12:16.189 "data_offset": 2048, 00:12:16.189 "data_size": 63488 00:12:16.189 }, 00:12:16.189 { 00:12:16.189 "name": "pt2", 00:12:16.189 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:16.189 "is_configured": true, 00:12:16.189 "data_offset": 2048, 00:12:16.189 "data_size": 63488 00:12:16.189 }, 00:12:16.189 { 00:12:16.189 "name": "pt3", 00:12:16.189 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:16.189 "is_configured": true, 00:12:16.189 "data_offset": 2048, 00:12:16.189 "data_size": 63488 00:12:16.189 }, 00:12:16.189 { 00:12:16.189 "name": null, 00:12:16.189 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:16.189 "is_configured": false, 00:12:16.189 "data_offset": 2048, 00:12:16.189 "data_size": 63488 00:12:16.189 } 00:12:16.189 ] 00:12:16.189 }' 00:12:16.189 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.189 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.448 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:16.448 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:16.448 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:16.448 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:16.448 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.448 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.448 [2024-12-12 05:50:23.962140] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:16.448 [2024-12-12 05:50:23.962282] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:16.448 [2024-12-12 05:50:23.962366] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:16.448 [2024-12-12 05:50:23.962462] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:16.448 [2024-12-12 05:50:23.963005] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:16.448 [2024-12-12 05:50:23.963073] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:16.448 [2024-12-12 05:50:23.963222] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:16.448 [2024-12-12 05:50:23.963283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:16.448 [2024-12-12 05:50:23.963512] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:16.448 [2024-12-12 05:50:23.963569] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:16.448 [2024-12-12 05:50:23.963891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:16.448 [2024-12-12 05:50:23.964128] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:16.448 [2024-12-12 05:50:23.964185] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:16.448 [2024-12-12 05:50:23.964435] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:16.448 pt4 00:12:16.448 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.707 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:16.707 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:16.707 05:50:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:16.707 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:16.707 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:16.707 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:16.707 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.707 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.707 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.707 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.707 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.707 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.707 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.707 05:50:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:16.707 05:50:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.707 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.707 "name": "raid_bdev1", 00:12:16.707 "uuid": "9d78334b-8645-4818-9edd-65d1749ec91a", 00:12:16.707 "strip_size_kb": 0, 00:12:16.707 "state": "online", 00:12:16.707 "raid_level": "raid1", 00:12:16.707 "superblock": true, 00:12:16.707 "num_base_bdevs": 4, 00:12:16.707 "num_base_bdevs_discovered": 3, 00:12:16.707 "num_base_bdevs_operational": 3, 00:12:16.707 "base_bdevs_list": [ 00:12:16.707 { 00:12:16.707 "name": null, 00:12:16.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.707 "is_configured": false, 00:12:16.707 
"data_offset": 2048, 00:12:16.707 "data_size": 63488 00:12:16.707 }, 00:12:16.707 { 00:12:16.707 "name": "pt2", 00:12:16.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:16.707 "is_configured": true, 00:12:16.707 "data_offset": 2048, 00:12:16.707 "data_size": 63488 00:12:16.707 }, 00:12:16.707 { 00:12:16.707 "name": "pt3", 00:12:16.707 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:16.707 "is_configured": true, 00:12:16.707 "data_offset": 2048, 00:12:16.707 "data_size": 63488 00:12:16.707 }, 00:12:16.707 { 00:12:16.707 "name": "pt4", 00:12:16.707 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:16.707 "is_configured": true, 00:12:16.707 "data_offset": 2048, 00:12:16.707 "data_size": 63488 00:12:16.707 } 00:12:16.707 ] 00:12:16.707 }' 00:12:16.707 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.707 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.966 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:16.966 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.966 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.966 [2024-12-12 05:50:24.413327] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:16.966 [2024-12-12 05:50:24.413408] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:16.966 [2024-12-12 05:50:24.413525] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:16.966 [2024-12-12 05:50:24.413685] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:16.966 [2024-12-12 05:50:24.413744] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:16.966 05:50:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.966 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.966 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.966 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.966 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:16.966 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.966 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:16.966 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:16.966 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:16.966 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:16.966 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:16.966 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.966 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.966 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.966 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:16.966 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.966 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.966 [2024-12-12 05:50:24.485187] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:16.966 [2024-12-12 05:50:24.485317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:16.966 [2024-12-12 05:50:24.485364] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:16.966 [2024-12-12 05:50:24.485409] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.224 [2024-12-12 05:50:24.487719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.224 [2024-12-12 05:50:24.487823] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:17.224 [2024-12-12 05:50:24.487965] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:17.224 [2024-12-12 05:50:24.488071] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:17.224 [2024-12-12 05:50:24.488297] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:17.224 [2024-12-12 05:50:24.488370] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:17.224 [2024-12-12 05:50:24.488422] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:17.224 [2024-12-12 05:50:24.488567] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:17.224 [2024-12-12 05:50:24.488722] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:17.224 pt1 00:12:17.224 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.224 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:17.224 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:17.224 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.224 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:17.224 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.224 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.224 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.224 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.224 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.224 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.224 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.224 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.224 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.224 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.224 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.224 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.224 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.224 "name": "raid_bdev1", 00:12:17.224 "uuid": "9d78334b-8645-4818-9edd-65d1749ec91a", 00:12:17.224 "strip_size_kb": 0, 00:12:17.224 "state": "configuring", 00:12:17.224 "raid_level": "raid1", 00:12:17.224 "superblock": true, 00:12:17.224 "num_base_bdevs": 4, 00:12:17.224 "num_base_bdevs_discovered": 2, 00:12:17.224 "num_base_bdevs_operational": 3, 00:12:17.224 "base_bdevs_list": [ 00:12:17.224 { 00:12:17.224 "name": null, 00:12:17.224 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.224 "is_configured": false, 00:12:17.224 "data_offset": 2048, 00:12:17.224 
"data_size": 63488 00:12:17.224 }, 00:12:17.224 { 00:12:17.224 "name": "pt2", 00:12:17.224 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.224 "is_configured": true, 00:12:17.224 "data_offset": 2048, 00:12:17.224 "data_size": 63488 00:12:17.224 }, 00:12:17.224 { 00:12:17.224 "name": "pt3", 00:12:17.224 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:17.224 "is_configured": true, 00:12:17.224 "data_offset": 2048, 00:12:17.224 "data_size": 63488 00:12:17.224 }, 00:12:17.224 { 00:12:17.224 "name": null, 00:12:17.224 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:17.224 "is_configured": false, 00:12:17.224 "data_offset": 2048, 00:12:17.224 "data_size": 63488 00:12:17.224 } 00:12:17.224 ] 00:12:17.224 }' 00:12:17.224 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.224 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.483 [2024-12-12 
05:50:24.980362] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:17.483 [2024-12-12 05:50:24.980481] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.483 [2024-12-12 05:50:24.980557] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:17.483 [2024-12-12 05:50:24.980600] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.483 [2024-12-12 05:50:24.981088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.483 [2024-12-12 05:50:24.981165] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:17.483 [2024-12-12 05:50:24.981295] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:17.483 [2024-12-12 05:50:24.981362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:17.483 [2024-12-12 05:50:24.981566] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:17.483 [2024-12-12 05:50:24.981622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:17.483 [2024-12-12 05:50:24.981936] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:17.483 [2024-12-12 05:50:24.982160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:17.483 [2024-12-12 05:50:24.982230] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:17.483 [2024-12-12 05:50:24.982487] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.483 pt4 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:17.483 05:50:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.483 05:50:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.745 05:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.745 05:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.745 "name": "raid_bdev1", 00:12:17.745 "uuid": "9d78334b-8645-4818-9edd-65d1749ec91a", 00:12:17.745 "strip_size_kb": 0, 00:12:17.745 "state": "online", 00:12:17.745 "raid_level": "raid1", 00:12:17.745 "superblock": true, 00:12:17.745 "num_base_bdevs": 4, 00:12:17.745 "num_base_bdevs_discovered": 3, 00:12:17.745 "num_base_bdevs_operational": 3, 00:12:17.745 "base_bdevs_list": [ 00:12:17.745 { 
00:12:17.745 "name": null, 00:12:17.745 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:17.745 "is_configured": false, 00:12:17.745 "data_offset": 2048, 00:12:17.745 "data_size": 63488 00:12:17.745 }, 00:12:17.745 { 00:12:17.745 "name": "pt2", 00:12:17.745 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:17.745 "is_configured": true, 00:12:17.745 "data_offset": 2048, 00:12:17.745 "data_size": 63488 00:12:17.745 }, 00:12:17.745 { 00:12:17.745 "name": "pt3", 00:12:17.745 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:17.745 "is_configured": true, 00:12:17.745 "data_offset": 2048, 00:12:17.745 "data_size": 63488 00:12:17.745 }, 00:12:17.745 { 00:12:17.745 "name": "pt4", 00:12:17.745 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:17.745 "is_configured": true, 00:12:17.745 "data_offset": 2048, 00:12:17.745 "data_size": 63488 00:12:17.745 } 00:12:17.745 ] 00:12:17.745 }' 00:12:17.745 05:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.745 05:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.004 05:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:18.004 05:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:18.004 05:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.004 05:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.004 05:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.004 05:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:18.004 05:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:18.004 05:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:18.004 
05:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.004 05:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.004 [2024-12-12 05:50:25.475843] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:18.004 05:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.004 05:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 9d78334b-8645-4818-9edd-65d1749ec91a '!=' 9d78334b-8645-4818-9edd-65d1749ec91a ']' 00:12:18.004 05:50:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 75422 00:12:18.004 05:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 75422 ']' 00:12:18.004 05:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 75422 00:12:18.004 05:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:12:18.004 05:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.004 05:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75422 00:12:18.262 05:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.262 05:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.262 05:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75422' 00:12:18.262 killing process with pid 75422 00:12:18.262 05:50:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 75422 00:12:18.262 [2024-12-12 05:50:25.538289] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:18.262 [2024-12-12 05:50:25.538457] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.262 05:50:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 75422 00:12:18.262 [2024-12-12 05:50:25.538590] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.262 [2024-12-12 05:50:25.538608] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:18.520 [2024-12-12 05:50:25.928355] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:19.911 05:50:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:19.911 ************************************ 00:12:19.911 END TEST raid_superblock_test 00:12:19.911 ************************************ 00:12:19.911 00:12:19.911 real 0m8.457s 00:12:19.911 user 0m13.261s 00:12:19.911 sys 0m1.548s 00:12:19.911 05:50:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.911 05:50:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.911 05:50:27 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:19.911 05:50:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:19.911 05:50:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.911 05:50:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:19.911 ************************************ 00:12:19.911 START TEST raid_read_error_test 00:12:19.911 ************************************ 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:19.911 
05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:19.911 05:50:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.busUkxCFjB 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75909 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75909 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75909 ']' 00:12:19.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:19.911 05:50:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.911 [2024-12-12 05:50:27.228982] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:12:19.911 [2024-12-12 05:50:27.229114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75909 ] 00:12:19.911 [2024-12-12 05:50:27.406197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:20.169 [2024-12-12 05:50:27.518426] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.428 [2024-12-12 05:50:27.717629] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.428 [2024-12-12 05:50:27.717684] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.687 BaseBdev1_malloc 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.687 true 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.687 [2024-12-12 05:50:28.101133] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:20.687 [2024-12-12 05:50:28.101241] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.687 [2024-12-12 05:50:28.101284] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:20.687 [2024-12-12 05:50:28.101297] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.687 [2024-12-12 05:50:28.103442] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.687 [2024-12-12 05:50:28.103493] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:20.687 BaseBdev1 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.687 BaseBdev2_malloc 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.687 true 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.687 [2024-12-12 05:50:28.166722] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:20.687 [2024-12-12 05:50:28.166825] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.687 [2024-12-12 05:50:28.166847] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:20.687 [2024-12-12 05:50:28.166860] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.687 [2024-12-12 05:50:28.168905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.687 [2024-12-12 05:50:28.168949] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:20.687 BaseBdev2 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.687 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.947 BaseBdev3_malloc 00:12:20.947 05:50:28 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.947 true 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.947 [2024-12-12 05:50:28.244454] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:20.947 [2024-12-12 05:50:28.244529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.947 [2024-12-12 05:50:28.244550] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:20.947 [2024-12-12 05:50:28.244562] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.947 [2024-12-12 05:50:28.246830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.947 [2024-12-12 05:50:28.246874] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:20.947 BaseBdev3 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.947 BaseBdev4_malloc 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.947 true 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.947 [2024-12-12 05:50:28.311567] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:20.947 [2024-12-12 05:50:28.311671] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.947 [2024-12-12 05:50:28.311694] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:20.947 [2024-12-12 05:50:28.311707] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.947 [2024-12-12 05:50:28.313836] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.947 [2024-12-12 05:50:28.313881] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:20.947 BaseBdev4 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.947 [2024-12-12 05:50:28.323599] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:20.947 [2024-12-12 05:50:28.325407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:20.947 [2024-12-12 05:50:28.325490] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:20.947 [2024-12-12 05:50:28.325570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:20.947 [2024-12-12 05:50:28.325804] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:20.947 [2024-12-12 05:50:28.325824] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:20.947 [2024-12-12 05:50:28.326067] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:20.947 [2024-12-12 05:50:28.326262] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:20.947 [2024-12-12 05:50:28.326272] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:20.947 [2024-12-12 05:50:28.326458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:20.947 05:50:28 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:20.947 "name": "raid_bdev1", 00:12:20.947 "uuid": "07a76c3d-5ae8-484f-8d1d-058f8ed04d83", 00:12:20.947 "strip_size_kb": 0, 00:12:20.947 "state": "online", 00:12:20.947 "raid_level": "raid1", 00:12:20.947 "superblock": true, 00:12:20.947 "num_base_bdevs": 4, 00:12:20.947 "num_base_bdevs_discovered": 4, 00:12:20.947 "num_base_bdevs_operational": 4, 00:12:20.947 "base_bdevs_list": [ 00:12:20.947 { 
00:12:20.947 "name": "BaseBdev1", 00:12:20.947 "uuid": "dc5d9e1e-3639-5939-97b3-ea25b8b3bc75", 00:12:20.947 "is_configured": true, 00:12:20.947 "data_offset": 2048, 00:12:20.947 "data_size": 63488 00:12:20.947 }, 00:12:20.947 { 00:12:20.947 "name": "BaseBdev2", 00:12:20.947 "uuid": "b900f48f-454b-5c9f-a042-deaec54f3ef9", 00:12:20.947 "is_configured": true, 00:12:20.947 "data_offset": 2048, 00:12:20.947 "data_size": 63488 00:12:20.947 }, 00:12:20.947 { 00:12:20.947 "name": "BaseBdev3", 00:12:20.947 "uuid": "9195c62b-8a4d-5a2a-99fa-41f80cdd2ff0", 00:12:20.947 "is_configured": true, 00:12:20.947 "data_offset": 2048, 00:12:20.947 "data_size": 63488 00:12:20.947 }, 00:12:20.947 { 00:12:20.947 "name": "BaseBdev4", 00:12:20.947 "uuid": "7b70877f-08d8-55eb-a083-c2337593b19a", 00:12:20.947 "is_configured": true, 00:12:20.947 "data_offset": 2048, 00:12:20.947 "data_size": 63488 00:12:20.947 } 00:12:20.947 ] 00:12:20.947 }' 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:20.947 05:50:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.516 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:21.516 05:50:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:21.516 [2024-12-12 05:50:28.860042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.451 05:50:29 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.451 05:50:29 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.451 05:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.451 "name": "raid_bdev1", 00:12:22.451 "uuid": "07a76c3d-5ae8-484f-8d1d-058f8ed04d83", 00:12:22.451 "strip_size_kb": 0, 00:12:22.451 "state": "online", 00:12:22.451 "raid_level": "raid1", 00:12:22.451 "superblock": true, 00:12:22.451 "num_base_bdevs": 4, 00:12:22.451 "num_base_bdevs_discovered": 4, 00:12:22.452 "num_base_bdevs_operational": 4, 00:12:22.452 "base_bdevs_list": [ 00:12:22.452 { 00:12:22.452 "name": "BaseBdev1", 00:12:22.452 "uuid": "dc5d9e1e-3639-5939-97b3-ea25b8b3bc75", 00:12:22.452 "is_configured": true, 00:12:22.452 "data_offset": 2048, 00:12:22.452 "data_size": 63488 00:12:22.452 }, 00:12:22.452 { 00:12:22.452 "name": "BaseBdev2", 00:12:22.452 "uuid": "b900f48f-454b-5c9f-a042-deaec54f3ef9", 00:12:22.452 "is_configured": true, 00:12:22.452 "data_offset": 2048, 00:12:22.452 "data_size": 63488 00:12:22.452 }, 00:12:22.452 { 00:12:22.452 "name": "BaseBdev3", 00:12:22.452 "uuid": "9195c62b-8a4d-5a2a-99fa-41f80cdd2ff0", 00:12:22.452 "is_configured": true, 00:12:22.452 "data_offset": 2048, 00:12:22.452 "data_size": 63488 00:12:22.452 }, 00:12:22.452 { 00:12:22.452 "name": "BaseBdev4", 00:12:22.452 "uuid": "7b70877f-08d8-55eb-a083-c2337593b19a", 00:12:22.452 "is_configured": true, 00:12:22.452 "data_offset": 2048, 00:12:22.452 "data_size": 63488 00:12:22.452 } 00:12:22.452 ] 00:12:22.452 }' 00:12:22.452 05:50:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.452 05:50:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.019 05:50:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:23.019 05:50:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.019 05:50:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:23.019 [2024-12-12 05:50:30.245137] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:23.019 [2024-12-12 05:50:30.245174] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.019 [2024-12-12 05:50:30.247873] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.019 [2024-12-12 05:50:30.248009] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:23.019 [2024-12-12 05:50:30.248140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.019 [2024-12-12 05:50:30.248155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:23.019 { 00:12:23.019 "results": [ 00:12:23.019 { 00:12:23.019 "job": "raid_bdev1", 00:12:23.019 "core_mask": "0x1", 00:12:23.019 "workload": "randrw", 00:12:23.019 "percentage": 50, 00:12:23.019 "status": "finished", 00:12:23.019 "queue_depth": 1, 00:12:23.019 "io_size": 131072, 00:12:23.019 "runtime": 1.386026, 00:12:23.019 "iops": 10729.235959498596, 00:12:23.019 "mibps": 1341.1544949373244, 00:12:23.019 "io_failed": 0, 00:12:23.019 "io_timeout": 0, 00:12:23.019 "avg_latency_us": 90.27734258436236, 00:12:23.019 "min_latency_us": 24.034934497816593, 00:12:23.019 "max_latency_us": 1373.6803493449781 00:12:23.019 } 00:12:23.019 ], 00:12:23.019 "core_count": 1 00:12:23.019 } 00:12:23.019 05:50:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.019 05:50:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75909 00:12:23.019 05:50:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75909 ']' 00:12:23.019 05:50:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75909 00:12:23.019 05:50:30 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:12:23.019 05:50:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:23.019 05:50:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75909 00:12:23.019 05:50:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:23.019 killing process with pid 75909 00:12:23.019 05:50:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:23.020 05:50:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75909' 00:12:23.020 05:50:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75909 00:12:23.020 [2024-12-12 05:50:30.279256] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:23.020 05:50:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75909 00:12:23.278 [2024-12-12 05:50:30.584401] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:24.212 05:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.busUkxCFjB 00:12:24.212 05:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:24.212 05:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:24.212 05:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:24.212 05:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:24.212 ************************************ 00:12:24.212 END TEST raid_read_error_test 00:12:24.212 ************************************ 00:12:24.212 05:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:24.212 05:50:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:24.212 05:50:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:24.212 00:12:24.212 real 0m4.606s 00:12:24.212 user 0m5.404s 00:12:24.212 sys 0m0.588s 00:12:24.212 05:50:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:24.212 05:50:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.472 05:50:31 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:24.472 05:50:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:24.472 05:50:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:24.472 05:50:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:24.472 ************************************ 00:12:24.472 START TEST raid_write_error_test 00:12:24.472 ************************************ 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tmXjd7K6He 00:12:24.472 05:50:31 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=76055 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 76055 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 76055 ']' 00:12:24.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:24.472 05:50:31 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.472 [2024-12-12 05:50:31.898244] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:12:24.472 [2024-12-12 05:50:31.898385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76055 ] 00:12:24.731 [2024-12-12 05:50:32.070935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.731 [2024-12-12 05:50:32.179770] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:24.990 [2024-12-12 05:50:32.369019] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:24.990 [2024-12-12 05:50:32.369082] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:25.249 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:25.249 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:25.249 05:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:25.249 05:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:25.249 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.249 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.249 BaseBdev1_malloc 00:12:25.249 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.249 05:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:25.249 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.249 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.509 true 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.509 [2024-12-12 05:50:32.777267] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:25.509 [2024-12-12 05:50:32.777331] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.509 [2024-12-12 05:50:32.777354] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:25.509 [2024-12-12 05:50:32.777367] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.509 [2024-12-12 05:50:32.779444] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.509 [2024-12-12 05:50:32.779547] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:25.509 BaseBdev1 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.509 BaseBdev2_malloc 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:25.509 05:50:32 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.509 true 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.509 [2024-12-12 05:50:32.841208] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:25.509 [2024-12-12 05:50:32.841268] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.509 [2024-12-12 05:50:32.841289] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:25.509 [2024-12-12 05:50:32.841302] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.509 [2024-12-12 05:50:32.843483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.509 [2024-12-12 05:50:32.843543] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:25.509 BaseBdev2 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:25.509 BaseBdev3_malloc 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.509 true 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.509 [2024-12-12 05:50:32.921022] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:25.509 [2024-12-12 05:50:32.921082] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.509 [2024-12-12 05:50:32.921104] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:25.509 [2024-12-12 05:50:32.921116] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.509 [2024-12-12 05:50:32.923174] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.509 [2024-12-12 05:50:32.923280] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:25.509 BaseBdev3 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:25.509 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.510 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.510 BaseBdev4_malloc 00:12:25.510 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.510 05:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:25.510 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.510 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.510 true 00:12:25.510 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.510 05:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:25.510 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.510 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.510 [2024-12-12 05:50:32.990575] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:25.510 [2024-12-12 05:50:32.990635] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.510 [2024-12-12 05:50:32.990657] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:25.510 [2024-12-12 05:50:32.990669] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.510 [2024-12-12 05:50:32.992786] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.510 [2024-12-12 05:50:32.992833] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:25.510 BaseBdev4 
00:12:25.510 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.510 05:50:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:25.510 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.510 05:50:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.510 [2024-12-12 05:50:33.002611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:25.510 [2024-12-12 05:50:33.004418] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:25.510 [2024-12-12 05:50:33.004503] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:25.510 [2024-12-12 05:50:33.004588] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:25.510 [2024-12-12 05:50:33.004879] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:25.510 [2024-12-12 05:50:33.004908] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:25.510 [2024-12-12 05:50:33.005159] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:25.510 [2024-12-12 05:50:33.005349] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:25.510 [2024-12-12 05:50:33.005360] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:25.510 [2024-12-12 05:50:33.005549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.510 05:50:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.510 05:50:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:12:25.510 05:50:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:25.510 05:50:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.510 05:50:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.510 05:50:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.510 05:50:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.510 05:50:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.510 05:50:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.510 05:50:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.510 05:50:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.510 05:50:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.510 05:50:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.510 05:50:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.510 05:50:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.769 05:50:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.769 05:50:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.769 "name": "raid_bdev1", 00:12:25.769 "uuid": "6963a019-405f-4267-9c44-b1eaca97d2de", 00:12:25.769 "strip_size_kb": 0, 00:12:25.769 "state": "online", 00:12:25.769 "raid_level": "raid1", 00:12:25.769 "superblock": true, 00:12:25.769 "num_base_bdevs": 4, 00:12:25.769 "num_base_bdevs_discovered": 4, 00:12:25.769 
"num_base_bdevs_operational": 4, 00:12:25.769 "base_bdevs_list": [ 00:12:25.769 { 00:12:25.769 "name": "BaseBdev1", 00:12:25.769 "uuid": "0585ad02-2017-5ce5-9a48-87d3a25d1e23", 00:12:25.769 "is_configured": true, 00:12:25.769 "data_offset": 2048, 00:12:25.769 "data_size": 63488 00:12:25.769 }, 00:12:25.769 { 00:12:25.769 "name": "BaseBdev2", 00:12:25.769 "uuid": "7121e754-53bf-5f03-88e7-c4a0a18271ff", 00:12:25.769 "is_configured": true, 00:12:25.769 "data_offset": 2048, 00:12:25.769 "data_size": 63488 00:12:25.769 }, 00:12:25.769 { 00:12:25.769 "name": "BaseBdev3", 00:12:25.769 "uuid": "9b42d6df-bb4f-5c6f-867e-01a7302fed29", 00:12:25.769 "is_configured": true, 00:12:25.769 "data_offset": 2048, 00:12:25.769 "data_size": 63488 00:12:25.769 }, 00:12:25.769 { 00:12:25.769 "name": "BaseBdev4", 00:12:25.769 "uuid": "4c475065-39e4-580b-9ad1-43cf616c92a8", 00:12:25.769 "is_configured": true, 00:12:25.769 "data_offset": 2048, 00:12:25.769 "data_size": 63488 00:12:25.769 } 00:12:25.769 ] 00:12:25.769 }' 00:12:25.769 05:50:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.769 05:50:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.028 05:50:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:26.028 05:50:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:26.028 [2024-12-12 05:50:33.491212] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.963 [2024-12-12 05:50:34.413751] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:26.963 [2024-12-12 05:50:34.413821] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:26.963 [2024-12-12 05:50:34.414061] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.963 "name": "raid_bdev1", 00:12:26.963 "uuid": "6963a019-405f-4267-9c44-b1eaca97d2de", 00:12:26.963 "strip_size_kb": 0, 00:12:26.963 "state": "online", 00:12:26.963 "raid_level": "raid1", 00:12:26.963 "superblock": true, 00:12:26.963 "num_base_bdevs": 4, 00:12:26.963 "num_base_bdevs_discovered": 3, 00:12:26.963 "num_base_bdevs_operational": 3, 00:12:26.963 "base_bdevs_list": [ 00:12:26.963 { 00:12:26.963 "name": null, 00:12:26.963 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.963 "is_configured": false, 00:12:26.963 "data_offset": 0, 00:12:26.963 "data_size": 63488 00:12:26.963 }, 00:12:26.963 { 00:12:26.963 "name": "BaseBdev2", 00:12:26.963 "uuid": "7121e754-53bf-5f03-88e7-c4a0a18271ff", 00:12:26.963 "is_configured": true, 00:12:26.963 "data_offset": 2048, 00:12:26.963 "data_size": 63488 00:12:26.963 }, 00:12:26.963 { 00:12:26.963 "name": "BaseBdev3", 00:12:26.963 "uuid": "9b42d6df-bb4f-5c6f-867e-01a7302fed29", 00:12:26.963 "is_configured": true, 00:12:26.963 "data_offset": 2048, 00:12:26.963 "data_size": 63488 00:12:26.963 }, 00:12:26.963 { 00:12:26.963 "name": "BaseBdev4", 00:12:26.963 "uuid": "4c475065-39e4-580b-9ad1-43cf616c92a8", 00:12:26.963 "is_configured": true, 00:12:26.963 "data_offset": 2048, 00:12:26.963 "data_size": 63488 00:12:26.963 } 00:12:26.963 ] 
00:12:26.963 }' 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.963 05:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.530 05:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:27.530 05:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:27.530 05:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.530 [2024-12-12 05:50:34.857842] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:27.530 [2024-12-12 05:50:34.857879] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:27.530 [2024-12-12 05:50:34.860739] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.530 [2024-12-12 05:50:34.860809] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.530 [2024-12-12 05:50:34.860919] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.530 [2024-12-12 05:50:34.860932] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:27.530 { 00:12:27.530 "results": [ 00:12:27.530 { 00:12:27.530 "job": "raid_bdev1", 00:12:27.530 "core_mask": "0x1", 00:12:27.530 "workload": "randrw", 00:12:27.530 "percentage": 50, 00:12:27.530 "status": "finished", 00:12:27.530 "queue_depth": 1, 00:12:27.530 "io_size": 131072, 00:12:27.530 "runtime": 1.367513, 00:12:27.530 "iops": 11241.575034387242, 00:12:27.530 "mibps": 1405.1968792984053, 00:12:27.530 "io_failed": 0, 00:12:27.530 "io_timeout": 0, 00:12:27.530 "avg_latency_us": 85.98065422363315, 00:12:27.530 "min_latency_us": 24.258515283842794, 00:12:27.530 "max_latency_us": 1438.071615720524 00:12:27.530 } 00:12:27.530 ], 00:12:27.530 "core_count": 1 
00:12:27.530 } 00:12:27.530 05:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:27.530 05:50:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 76055 00:12:27.530 05:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 76055 ']' 00:12:27.530 05:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 76055 00:12:27.530 05:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:12:27.530 05:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.530 05:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76055 00:12:27.530 05:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.530 05:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.530 05:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76055' 00:12:27.530 killing process with pid 76055 00:12:27.530 05:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 76055 00:12:27.530 [2024-12-12 05:50:34.904227] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:27.530 05:50:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 76055 00:12:27.788 [2024-12-12 05:50:35.228517] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:29.175 05:50:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tmXjd7K6He 00:12:29.175 05:50:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:29.175 05:50:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:29.175 05:50:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:12:29.175 05:50:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:29.175 05:50:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:29.175 05:50:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:29.175 05:50:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:29.175 00:12:29.175 real 0m4.583s 00:12:29.175 user 0m5.348s 00:12:29.175 sys 0m0.578s 00:12:29.175 ************************************ 00:12:29.175 END TEST raid_write_error_test 00:12:29.175 ************************************ 00:12:29.175 05:50:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:29.175 05:50:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.175 05:50:36 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:12:29.175 05:50:36 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:12:29.175 05:50:36 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:12:29.175 05:50:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:29.175 05:50:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:29.175 05:50:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:29.175 ************************************ 00:12:29.175 START TEST raid_rebuild_test 00:12:29.175 ************************************ 00:12:29.175 05:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:12:29.175 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:29.175 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:29.175 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:12:29.175 
05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:29.175 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:29.175 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:29.175 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:29.175 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:29.175 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:29.175 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:29.175 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:29.175 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:29.175 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:29.175 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:29.175 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:29.176 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:29.176 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:29.176 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:29.176 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:29.176 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:29.176 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:29.176 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:29.176 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:12:29.176 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=76199 00:12:29.176 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:29.176 05:50:36 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 76199 00:12:29.176 05:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 76199 ']' 00:12:29.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.176 05:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.176 05:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:29.176 05:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.176 05:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:29.176 05:50:36 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.176 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:29.176 Zero copy mechanism will not be used. 00:12:29.176 [2024-12-12 05:50:36.541862] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:12:29.176 [2024-12-12 05:50:36.541974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76199 ] 00:12:29.434 [2024-12-12 05:50:36.712307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.435 [2024-12-12 05:50:36.820012] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.693 [2024-12-12 05:50:37.006274] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.693 [2024-12-12 05:50:37.006437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.952 BaseBdev1_malloc 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.952 [2024-12-12 05:50:37.409959] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:29.952 
[2024-12-12 05:50:37.410049] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.952 [2024-12-12 05:50:37.410073] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:29.952 [2024-12-12 05:50:37.410087] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.952 [2024-12-12 05:50:37.412219] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.952 [2024-12-12 05:50:37.412339] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:29.952 BaseBdev1 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.952 BaseBdev2_malloc 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.952 [2024-12-12 05:50:37.465069] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:29.952 [2024-12-12 05:50:37.465155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.952 [2024-12-12 05:50:37.465189] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007e80 00:12:29.952 [2024-12-12 05:50:37.465205] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.952 [2024-12-12 05:50:37.467353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.952 [2024-12-12 05:50:37.467461] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:29.952 BaseBdev2 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.952 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.212 spare_malloc 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.212 spare_delay 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.212 [2024-12-12 05:50:37.541275] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:30.212 [2024-12-12 05:50:37.541343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:12:30.212 [2024-12-12 05:50:37.541365] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:30.212 [2024-12-12 05:50:37.541378] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:30.212 [2024-12-12 05:50:37.543527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:30.212 [2024-12-12 05:50:37.543569] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:30.212 spare 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.212 [2024-12-12 05:50:37.553315] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:30.212 [2024-12-12 05:50:37.555103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:30.212 [2024-12-12 05:50:37.555213] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:30.212 [2024-12-12 05:50:37.555229] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:30.212 [2024-12-12 05:50:37.555495] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:30.212 [2024-12-12 05:50:37.555680] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:30.212 [2024-12-12 05:50:37.555692] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:30.212 [2024-12-12 05:50:37.555870] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.212 "name": "raid_bdev1", 00:12:30.212 "uuid": "7f61052a-da69-4387-bb2b-084e0db67809", 00:12:30.212 "strip_size_kb": 0, 00:12:30.212 "state": "online", 00:12:30.212 
"raid_level": "raid1", 00:12:30.212 "superblock": false, 00:12:30.212 "num_base_bdevs": 2, 00:12:30.212 "num_base_bdevs_discovered": 2, 00:12:30.212 "num_base_bdevs_operational": 2, 00:12:30.212 "base_bdevs_list": [ 00:12:30.212 { 00:12:30.212 "name": "BaseBdev1", 00:12:30.212 "uuid": "70c3d168-a270-5525-9524-b7b5ae0e1c54", 00:12:30.212 "is_configured": true, 00:12:30.212 "data_offset": 0, 00:12:30.212 "data_size": 65536 00:12:30.212 }, 00:12:30.212 { 00:12:30.212 "name": "BaseBdev2", 00:12:30.212 "uuid": "7be4a853-c8a0-54ab-8ea6-f09fd66e99b1", 00:12:30.212 "is_configured": true, 00:12:30.212 "data_offset": 0, 00:12:30.212 "data_size": 65536 00:12:30.212 } 00:12:30.212 ] 00:12:30.212 }' 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.212 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.470 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:30.470 05:50:37 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:30.470 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.470 05:50:37 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.470 [2024-12-12 05:50:37.988828] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.729 05:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.729 05:50:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:12:30.729 05:50:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.729 05:50:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:30.729 05:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.729 05:50:38 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.729 05:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.729 05:50:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:12:30.729 05:50:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:30.729 05:50:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:30.729 05:50:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:30.729 05:50:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:30.729 05:50:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:30.729 05:50:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:30.729 05:50:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:30.729 05:50:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:30.729 05:50:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:30.729 05:50:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:30.729 05:50:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:30.729 05:50:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:30.729 05:50:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:30.987 [2024-12-12 05:50:38.260148] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:30.987 /dev/nbd0 00:12:30.987 05:50:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:30.987 05:50:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd0 00:12:30.987 05:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:30.987 05:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:30.987 05:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:30.987 05:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:30.987 05:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:30.987 05:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:30.987 05:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:30.987 05:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:30.987 05:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:30.987 1+0 records in 00:12:30.987 1+0 records out 00:12:30.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000406062 s, 10.1 MB/s 00:12:30.987 05:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.987 05:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:30.987 05:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:30.987 05:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:30.987 05:50:38 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:30.987 05:50:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:30.987 05:50:38 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:30.987 05:50:38 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:30.987 05:50:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:30.987 05:50:38 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:12:35.180 65536+0 records in 00:12:35.180 65536+0 records out 00:12:35.180 33554432 bytes (34 MB, 32 MiB) copied, 4.22567 s, 7.9 MB/s 00:12:35.180 05:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:35.180 05:50:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:35.180 05:50:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:35.180 05:50:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:35.180 05:50:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:35.180 05:50:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:35.180 05:50:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:35.439 [2024-12-12 05:50:42.765548] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@41 -- # break 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.439 [2024-12-12 05:50:42.781617] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.439 05:50:42 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.439 "name": "raid_bdev1", 00:12:35.439 "uuid": "7f61052a-da69-4387-bb2b-084e0db67809", 00:12:35.439 "strip_size_kb": 0, 00:12:35.439 "state": "online", 00:12:35.439 "raid_level": "raid1", 00:12:35.439 "superblock": false, 00:12:35.439 "num_base_bdevs": 2, 00:12:35.439 "num_base_bdevs_discovered": 1, 00:12:35.439 "num_base_bdevs_operational": 1, 00:12:35.439 "base_bdevs_list": [ 00:12:35.439 { 00:12:35.439 "name": null, 00:12:35.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.439 "is_configured": false, 00:12:35.439 "data_offset": 0, 00:12:35.439 "data_size": 65536 00:12:35.439 }, 00:12:35.439 { 00:12:35.439 "name": "BaseBdev2", 00:12:35.439 "uuid": "7be4a853-c8a0-54ab-8ea6-f09fd66e99b1", 00:12:35.439 "is_configured": true, 00:12:35.439 "data_offset": 0, 00:12:35.439 "data_size": 65536 00:12:35.439 } 00:12:35.439 ] 00:12:35.439 }' 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.439 05:50:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.006 05:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:36.006 05:50:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.007 05:50:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.007 [2024-12-12 05:50:43.244851] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:36.007 [2024-12-12 05:50:43.261448] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 
00:12:36.007 05:50:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.007 05:50:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:36.007 [2024-12-12 05:50:43.263421] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:36.942 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:36.942 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:36.942 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:36.942 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:36.942 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:36.942 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.942 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.942 05:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.942 05:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.942 05:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.942 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:36.942 "name": "raid_bdev1", 00:12:36.942 "uuid": "7f61052a-da69-4387-bb2b-084e0db67809", 00:12:36.942 "strip_size_kb": 0, 00:12:36.942 "state": "online", 00:12:36.942 "raid_level": "raid1", 00:12:36.942 "superblock": false, 00:12:36.942 "num_base_bdevs": 2, 00:12:36.942 "num_base_bdevs_discovered": 2, 00:12:36.942 "num_base_bdevs_operational": 2, 00:12:36.942 "process": { 00:12:36.942 "type": "rebuild", 00:12:36.942 "target": "spare", 00:12:36.942 "progress": { 00:12:36.942 
"blocks": 20480, 00:12:36.942 "percent": 31 00:12:36.942 } 00:12:36.942 }, 00:12:36.942 "base_bdevs_list": [ 00:12:36.942 { 00:12:36.942 "name": "spare", 00:12:36.942 "uuid": "b2070743-7873-5c63-9a28-ba43c6bd8162", 00:12:36.942 "is_configured": true, 00:12:36.942 "data_offset": 0, 00:12:36.942 "data_size": 65536 00:12:36.942 }, 00:12:36.942 { 00:12:36.942 "name": "BaseBdev2", 00:12:36.942 "uuid": "7be4a853-c8a0-54ab-8ea6-f09fd66e99b1", 00:12:36.942 "is_configured": true, 00:12:36.942 "data_offset": 0, 00:12:36.942 "data_size": 65536 00:12:36.942 } 00:12:36.942 ] 00:12:36.942 }' 00:12:36.942 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:36.942 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:36.942 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:36.942 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:36.942 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:36.942 05:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.942 05:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.942 [2024-12-12 05:50:44.414956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:37.201 [2024-12-12 05:50:44.468677] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:37.201 [2024-12-12 05:50:44.468834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.201 [2024-12-12 05:50:44.468853] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:37.201 [2024-12-12 05:50:44.468865] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:37.201 05:50:44 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.201 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:37.201 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:37.201 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.201 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.201 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.201 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:37.201 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.201 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.201 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.201 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.201 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.202 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.202 05:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.202 05:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.202 05:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.202 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.202 "name": "raid_bdev1", 00:12:37.202 "uuid": "7f61052a-da69-4387-bb2b-084e0db67809", 00:12:37.202 "strip_size_kb": 0, 00:12:37.202 "state": "online", 00:12:37.202 "raid_level": "raid1", 00:12:37.202 
"superblock": false, 00:12:37.202 "num_base_bdevs": 2, 00:12:37.202 "num_base_bdevs_discovered": 1, 00:12:37.202 "num_base_bdevs_operational": 1, 00:12:37.202 "base_bdevs_list": [ 00:12:37.202 { 00:12:37.202 "name": null, 00:12:37.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.202 "is_configured": false, 00:12:37.202 "data_offset": 0, 00:12:37.202 "data_size": 65536 00:12:37.202 }, 00:12:37.202 { 00:12:37.202 "name": "BaseBdev2", 00:12:37.202 "uuid": "7be4a853-c8a0-54ab-8ea6-f09fd66e99b1", 00:12:37.202 "is_configured": true, 00:12:37.202 "data_offset": 0, 00:12:37.202 "data_size": 65536 00:12:37.202 } 00:12:37.202 ] 00:12:37.202 }' 00:12:37.202 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.202 05:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.461 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:37.461 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:37.461 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:37.461 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:37.461 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:37.461 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.461 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:37.461 05:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.461 05:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.461 05:50:44 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.461 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:12:37.461 "name": "raid_bdev1", 00:12:37.461 "uuid": "7f61052a-da69-4387-bb2b-084e0db67809", 00:12:37.461 "strip_size_kb": 0, 00:12:37.461 "state": "online", 00:12:37.461 "raid_level": "raid1", 00:12:37.461 "superblock": false, 00:12:37.461 "num_base_bdevs": 2, 00:12:37.461 "num_base_bdevs_discovered": 1, 00:12:37.461 "num_base_bdevs_operational": 1, 00:12:37.461 "base_bdevs_list": [ 00:12:37.461 { 00:12:37.461 "name": null, 00:12:37.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.461 "is_configured": false, 00:12:37.461 "data_offset": 0, 00:12:37.461 "data_size": 65536 00:12:37.461 }, 00:12:37.461 { 00:12:37.461 "name": "BaseBdev2", 00:12:37.461 "uuid": "7be4a853-c8a0-54ab-8ea6-f09fd66e99b1", 00:12:37.461 "is_configured": true, 00:12:37.461 "data_offset": 0, 00:12:37.461 "data_size": 65536 00:12:37.461 } 00:12:37.461 ] 00:12:37.461 }' 00:12:37.719 05:50:44 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:37.719 05:50:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:37.719 05:50:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:37.719 05:50:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:37.719 05:50:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:37.719 05:50:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.719 05:50:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.719 [2024-12-12 05:50:45.079011] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:37.719 [2024-12-12 05:50:45.094779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:12:37.719 05:50:45 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.719 
05:50:45 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:37.719 [2024-12-12 05:50:45.096619] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:38.659 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.659 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.659 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.659 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.659 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.659 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.659 05:50:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.659 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.659 05:50:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.659 05:50:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.659 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.659 "name": "raid_bdev1", 00:12:38.659 "uuid": "7f61052a-da69-4387-bb2b-084e0db67809", 00:12:38.659 "strip_size_kb": 0, 00:12:38.659 "state": "online", 00:12:38.659 "raid_level": "raid1", 00:12:38.659 "superblock": false, 00:12:38.659 "num_base_bdevs": 2, 00:12:38.659 "num_base_bdevs_discovered": 2, 00:12:38.659 "num_base_bdevs_operational": 2, 00:12:38.659 "process": { 00:12:38.659 "type": "rebuild", 00:12:38.659 "target": "spare", 00:12:38.659 "progress": { 00:12:38.659 "blocks": 20480, 00:12:38.659 "percent": 31 00:12:38.659 } 00:12:38.660 }, 00:12:38.660 "base_bdevs_list": [ 
00:12:38.660 { 00:12:38.660 "name": "spare", 00:12:38.660 "uuid": "b2070743-7873-5c63-9a28-ba43c6bd8162", 00:12:38.660 "is_configured": true, 00:12:38.660 "data_offset": 0, 00:12:38.660 "data_size": 65536 00:12:38.660 }, 00:12:38.660 { 00:12:38.660 "name": "BaseBdev2", 00:12:38.660 "uuid": "7be4a853-c8a0-54ab-8ea6-f09fd66e99b1", 00:12:38.660 "is_configured": true, 00:12:38.660 "data_offset": 0, 00:12:38.660 "data_size": 65536 00:12:38.660 } 00:12:38.660 ] 00:12:38.660 }' 00:12:38.660 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.927 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:38.927 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.927 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:38.927 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:12:38.927 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:38.927 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:38.927 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:38.927 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=360 00:12:38.927 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:38.927 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:38.927 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:38.927 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:38.927 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:38.927 
05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:38.927 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:38.927 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.927 05:50:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.927 05:50:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.927 05:50:46 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.927 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:38.927 "name": "raid_bdev1", 00:12:38.927 "uuid": "7f61052a-da69-4387-bb2b-084e0db67809", 00:12:38.927 "strip_size_kb": 0, 00:12:38.927 "state": "online", 00:12:38.928 "raid_level": "raid1", 00:12:38.928 "superblock": false, 00:12:38.928 "num_base_bdevs": 2, 00:12:38.928 "num_base_bdevs_discovered": 2, 00:12:38.928 "num_base_bdevs_operational": 2, 00:12:38.928 "process": { 00:12:38.928 "type": "rebuild", 00:12:38.928 "target": "spare", 00:12:38.928 "progress": { 00:12:38.928 "blocks": 22528, 00:12:38.928 "percent": 34 00:12:38.928 } 00:12:38.928 }, 00:12:38.928 "base_bdevs_list": [ 00:12:38.928 { 00:12:38.928 "name": "spare", 00:12:38.928 "uuid": "b2070743-7873-5c63-9a28-ba43c6bd8162", 00:12:38.928 "is_configured": true, 00:12:38.928 "data_offset": 0, 00:12:38.928 "data_size": 65536 00:12:38.928 }, 00:12:38.928 { 00:12:38.928 "name": "BaseBdev2", 00:12:38.928 "uuid": "7be4a853-c8a0-54ab-8ea6-f09fd66e99b1", 00:12:38.928 "is_configured": true, 00:12:38.928 "data_offset": 0, 00:12:38.928 "data_size": 65536 00:12:38.928 } 00:12:38.928 ] 00:12:38.928 }' 00:12:38.928 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:38.928 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == 
\r\e\b\u\i\l\d ]] 00:12:38.928 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:38.928 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:38.928 05:50:46 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:39.867 05:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:39.867 05:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:39.867 05:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:39.867 05:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:39.868 05:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:39.868 05:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:39.868 05:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.868 05:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.868 05:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.868 05:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.127 05:50:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.127 05:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:40.127 "name": "raid_bdev1", 00:12:40.127 "uuid": "7f61052a-da69-4387-bb2b-084e0db67809", 00:12:40.127 "strip_size_kb": 0, 00:12:40.127 "state": "online", 00:12:40.127 "raid_level": "raid1", 00:12:40.127 "superblock": false, 00:12:40.127 "num_base_bdevs": 2, 00:12:40.127 "num_base_bdevs_discovered": 2, 00:12:40.127 "num_base_bdevs_operational": 2, 00:12:40.127 "process": { 
00:12:40.127 "type": "rebuild", 00:12:40.127 "target": "spare", 00:12:40.127 "progress": { 00:12:40.127 "blocks": 45056, 00:12:40.127 "percent": 68 00:12:40.127 } 00:12:40.127 }, 00:12:40.127 "base_bdevs_list": [ 00:12:40.127 { 00:12:40.127 "name": "spare", 00:12:40.127 "uuid": "b2070743-7873-5c63-9a28-ba43c6bd8162", 00:12:40.127 "is_configured": true, 00:12:40.127 "data_offset": 0, 00:12:40.127 "data_size": 65536 00:12:40.127 }, 00:12:40.127 { 00:12:40.127 "name": "BaseBdev2", 00:12:40.127 "uuid": "7be4a853-c8a0-54ab-8ea6-f09fd66e99b1", 00:12:40.128 "is_configured": true, 00:12:40.128 "data_offset": 0, 00:12:40.128 "data_size": 65536 00:12:40.128 } 00:12:40.128 ] 00:12:40.128 }' 00:12:40.128 05:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:40.128 05:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:40.128 05:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:40.128 05:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:40.128 05:50:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:41.067 [2024-12-12 05:50:48.311049] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:41.067 [2024-12-12 05:50:48.311275] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:41.067 [2024-12-12 05:50:48.311383] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:41.067 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:41.067 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:41.067 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.067 05:50:48 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:41.067 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:41.067 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.067 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.067 05:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.067 05:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.067 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.067 05:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.067 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.067 "name": "raid_bdev1", 00:12:41.067 "uuid": "7f61052a-da69-4387-bb2b-084e0db67809", 00:12:41.067 "strip_size_kb": 0, 00:12:41.067 "state": "online", 00:12:41.067 "raid_level": "raid1", 00:12:41.067 "superblock": false, 00:12:41.067 "num_base_bdevs": 2, 00:12:41.067 "num_base_bdevs_discovered": 2, 00:12:41.067 "num_base_bdevs_operational": 2, 00:12:41.067 "base_bdevs_list": [ 00:12:41.067 { 00:12:41.067 "name": "spare", 00:12:41.067 "uuid": "b2070743-7873-5c63-9a28-ba43c6bd8162", 00:12:41.067 "is_configured": true, 00:12:41.067 "data_offset": 0, 00:12:41.067 "data_size": 65536 00:12:41.067 }, 00:12:41.067 { 00:12:41.067 "name": "BaseBdev2", 00:12:41.067 "uuid": "7be4a853-c8a0-54ab-8ea6-f09fd66e99b1", 00:12:41.067 "is_configured": true, 00:12:41.067 "data_offset": 0, 00:12:41.067 "data_size": 65536 00:12:41.067 } 00:12:41.067 ] 00:12:41.067 }' 00:12:41.067 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:41.327 05:50:48 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:41.327 "name": "raid_bdev1", 00:12:41.327 "uuid": "7f61052a-da69-4387-bb2b-084e0db67809", 00:12:41.327 "strip_size_kb": 0, 00:12:41.327 "state": "online", 00:12:41.327 "raid_level": "raid1", 00:12:41.327 "superblock": false, 00:12:41.327 "num_base_bdevs": 2, 00:12:41.327 "num_base_bdevs_discovered": 2, 00:12:41.327 "num_base_bdevs_operational": 2, 00:12:41.327 "base_bdevs_list": [ 00:12:41.327 { 00:12:41.327 "name": "spare", 00:12:41.327 "uuid": "b2070743-7873-5c63-9a28-ba43c6bd8162", 00:12:41.327 "is_configured": true, 
00:12:41.327 "data_offset": 0, 00:12:41.327 "data_size": 65536 00:12:41.327 }, 00:12:41.327 { 00:12:41.327 "name": "BaseBdev2", 00:12:41.327 "uuid": "7be4a853-c8a0-54ab-8ea6-f09fd66e99b1", 00:12:41.327 "is_configured": true, 00:12:41.327 "data_offset": 0, 00:12:41.327 "data_size": 65536 00:12:41.327 } 00:12:41.327 ] 00:12:41.327 }' 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:41.327 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.328 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.328 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:41.328 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.328 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.328 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.328 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.328 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.328 05:50:48 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.328 05:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.328 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.328 05:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.328 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.328 "name": "raid_bdev1", 00:12:41.328 "uuid": "7f61052a-da69-4387-bb2b-084e0db67809", 00:12:41.328 "strip_size_kb": 0, 00:12:41.328 "state": "online", 00:12:41.328 "raid_level": "raid1", 00:12:41.328 "superblock": false, 00:12:41.328 "num_base_bdevs": 2, 00:12:41.328 "num_base_bdevs_discovered": 2, 00:12:41.328 "num_base_bdevs_operational": 2, 00:12:41.328 "base_bdevs_list": [ 00:12:41.328 { 00:12:41.328 "name": "spare", 00:12:41.328 "uuid": "b2070743-7873-5c63-9a28-ba43c6bd8162", 00:12:41.328 "is_configured": true, 00:12:41.328 "data_offset": 0, 00:12:41.328 "data_size": 65536 00:12:41.328 }, 00:12:41.328 { 00:12:41.328 "name": "BaseBdev2", 00:12:41.328 "uuid": "7be4a853-c8a0-54ab-8ea6-f09fd66e99b1", 00:12:41.328 "is_configured": true, 00:12:41.328 "data_offset": 0, 00:12:41.328 "data_size": 65536 00:12:41.328 } 00:12:41.328 ] 00:12:41.328 }' 00:12:41.328 05:50:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.328 05:50:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.898 [2024-12-12 05:50:49.212350] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:41.898 [2024-12-12 05:50:49.212454] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:41.898 [2024-12-12 05:50:49.212644] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.898 [2024-12-12 05:50:49.212743] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:41.898 [2024-12-12 05:50:49.212757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:41.898 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:42.158 /dev/nbd0 00:12:42.158 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:42.158 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:42.158 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:42.158 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:42.158 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:42.158 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:42.158 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:42.158 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:42.158 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:42.158 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:42.158 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.158 1+0 records in 00:12:42.158 1+0 records out 00:12:42.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000349043 s, 11.7 MB/s 00:12:42.158 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 
-- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.158 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:42.158 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.158 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:42.158 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:42.158 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.158 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:42.158 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:42.418 /dev/nbd1 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd 
if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.418 1+0 records in 00:12:42.418 1+0 records out 00:12:42.418 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431577 s, 9.5 MB/s 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.418 05:50:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:42.678 05:50:50 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:42.678 05:50:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:42.678 05:50:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:42.678 05:50:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.678 05:50:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.678 05:50:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:42.678 05:50:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:42.678 05:50:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.678 05:50:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:42.678 05:50:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:42.937 05:50:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:42.937 05:50:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:42.937 05:50:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:42.937 05:50:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:42.937 05:50:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:42.937 05:50:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:42.937 05:50:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:12:42.937 05:50:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:12:42.937 05:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:12:42.937 05:50:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 76199 00:12:42.937 05:50:50 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 76199 ']' 00:12:42.937 05:50:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 76199 00:12:42.937 05:50:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:12:42.937 05:50:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:42.937 05:50:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76199 00:12:42.937 05:50:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:42.937 05:50:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:42.937 05:50:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76199' 00:12:42.937 killing process with pid 76199 00:12:42.937 Received shutdown signal, test time was about 60.000000 seconds 00:12:42.937 00:12:42.937 Latency(us) 00:12:42.937 [2024-12-12T05:50:50.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:42.937 [2024-12-12T05:50:50.459Z] =================================================================================================================== 00:12:42.937 [2024-12-12T05:50:50.459Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:42.937 05:50:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 76199 00:12:42.937 [2024-12-12 05:50:50.404315] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:42.937 05:50:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 76199 00:12:43.197 [2024-12-12 05:50:50.689449] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:12:44.580 00:12:44.580 real 0m15.291s 00:12:44.580 user 0m17.030s 00:12:44.580 sys 0m2.958s 00:12:44.580 05:50:51 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.580 ************************************ 00:12:44.580 END TEST raid_rebuild_test 00:12:44.580 ************************************ 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.580 05:50:51 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:12:44.580 05:50:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:12:44.580 05:50:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.580 05:50:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:44.580 ************************************ 00:12:44.580 START TEST raid_rebuild_test_sb 00:12:44.580 ************************************ 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=76616 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 76616 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 76616 ']' 00:12:44.580 05:50:51 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.580 05:50:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.580 I/O size of 3145728 is greater than zero copy threshold (65536). 00:12:44.580 Zero copy mechanism will not be used. 00:12:44.580 [2024-12-12 05:50:51.903662] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:12:44.580 [2024-12-12 05:50:51.903782] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76616 ] 00:12:44.580 [2024-12-12 05:50:52.054001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.840 [2024-12-12 05:50:52.163370] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.099 [2024-12-12 05:50:52.360965] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.099 [2024-12-12 05:50:52.361032] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev 
in "${base_bdevs[@]}" 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.359 BaseBdev1_malloc 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.359 [2024-12-12 05:50:52.787218] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:12:45.359 [2024-12-12 05:50:52.787287] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.359 [2024-12-12 05:50:52.787311] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:45.359 [2024-12-12 05:50:52.787324] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.359 [2024-12-12 05:50:52.789413] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.359 [2024-12-12 05:50:52.789460] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:45.359 BaseBdev1 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:45.359 05:50:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.359 BaseBdev2_malloc 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.359 [2024-12-12 05:50:52.837031] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:12:45.359 [2024-12-12 05:50:52.837157] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.359 [2024-12-12 05:50:52.837180] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:45.359 [2024-12-12 05:50:52.837193] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.359 [2024-12-12 05:50:52.839263] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.359 [2024-12-12 05:50:52.839308] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:45.359 BaseBdev2 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.359 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.619 spare_malloc 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.619 spare_delay 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.619 [2024-12-12 05:50:52.935303] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:45.619 [2024-12-12 05:50:52.935372] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.619 [2024-12-12 05:50:52.935395] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:12:45.619 [2024-12-12 05:50:52.935409] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.619 [2024-12-12 05:50:52.937744] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.619 [2024-12-12 05:50:52.937793] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:45.619 spare 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.619 05:50:52 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.619 [2024-12-12 05:50:52.947351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:45.619 [2024-12-12 05:50:52.949377] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:45.619 [2024-12-12 05:50:52.949601] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:45.619 [2024-12-12 05:50:52.949621] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:45.619 [2024-12-12 05:50:52.949878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:12:45.619 [2024-12-12 05:50:52.950058] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:45.619 [2024-12-12 05:50:52.950076] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:45.619 [2024-12-12 05:50:52.950256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:45.619 05:50:52 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.620 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.620 "name": "raid_bdev1", 00:12:45.620 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:12:45.620 "strip_size_kb": 0, 00:12:45.620 "state": "online", 00:12:45.620 "raid_level": "raid1", 00:12:45.620 "superblock": true, 00:12:45.620 "num_base_bdevs": 2, 00:12:45.620 "num_base_bdevs_discovered": 2, 00:12:45.620 "num_base_bdevs_operational": 2, 00:12:45.620 "base_bdevs_list": [ 00:12:45.620 { 00:12:45.620 "name": "BaseBdev1", 00:12:45.620 "uuid": "354fbc46-e1b3-56e6-896f-01b69064340d", 00:12:45.620 "is_configured": true, 00:12:45.620 "data_offset": 2048, 00:12:45.620 "data_size": 63488 00:12:45.620 }, 00:12:45.620 { 00:12:45.620 "name": "BaseBdev2", 00:12:45.620 "uuid": "bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:12:45.620 "is_configured": true, 00:12:45.620 "data_offset": 2048, 00:12:45.620 "data_size": 63488 00:12:45.620 } 00:12:45.620 ] 00:12:45.620 }' 00:12:45.620 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.620 05:50:53 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:46.189 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:12:46.190 [2024-12-12 05:50:53.418808] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:46.190 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:12:46.190 [2024-12-12 05:50:53.694435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:46.450 /dev/nbd0 00:12:46.450 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:46.450 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:46.450 05:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:46.450 05:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:46.450 05:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:46.450 05:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:46.450 05:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:46.450 05:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:46.450 05:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 
00:12:46.450 05:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:46.450 05:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.450 1+0 records in 00:12:46.450 1+0 records out 00:12:46.450 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000522356 s, 7.8 MB/s 00:12:46.450 05:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.450 05:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:46.450 05:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.450 05:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:46.450 05:50:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:46.450 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.450 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:12:46.450 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:12:46.450 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:12:46.450 05:50:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:12:50.648 63488+0 records in 00:12:50.648 63488+0 records out 00:12:50.648 32505856 bytes (33 MB, 31 MiB) copied, 4.05187 s, 8.0 MB/s 00:12:50.648 05:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:12:50.648 05:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:50.648 05:50:57 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:50.648 05:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:50.649 05:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:50.649 05:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.649 05:50:57 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:50.649 [2024-12-12 05:50:58.003784] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.649 [2024-12-12 05:50:58.039816] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.649 "name": "raid_bdev1", 00:12:50.649 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:12:50.649 "strip_size_kb": 0, 00:12:50.649 "state": "online", 00:12:50.649 "raid_level": "raid1", 00:12:50.649 "superblock": true, 
00:12:50.649 "num_base_bdevs": 2, 00:12:50.649 "num_base_bdevs_discovered": 1, 00:12:50.649 "num_base_bdevs_operational": 1, 00:12:50.649 "base_bdevs_list": [ 00:12:50.649 { 00:12:50.649 "name": null, 00:12:50.649 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.649 "is_configured": false, 00:12:50.649 "data_offset": 0, 00:12:50.649 "data_size": 63488 00:12:50.649 }, 00:12:50.649 { 00:12:50.649 "name": "BaseBdev2", 00:12:50.649 "uuid": "bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:12:50.649 "is_configured": true, 00:12:50.649 "data_offset": 2048, 00:12:50.649 "data_size": 63488 00:12:50.649 } 00:12:50.649 ] 00:12:50.649 }' 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.649 05:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.218 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:51.218 05:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.218 05:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.218 [2024-12-12 05:50:58.483158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:51.218 [2024-12-12 05:50:58.500554] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:12:51.218 05:50:58 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.218 [2024-12-12 05:50:58.502470] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:51.218 05:50:58 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:12:52.157 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:52.157 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:12:52.157 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:52.157 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:52.157 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.157 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.157 05:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.157 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.157 05:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.157 05:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.157 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.157 "name": "raid_bdev1", 00:12:52.157 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:12:52.157 "strip_size_kb": 0, 00:12:52.157 "state": "online", 00:12:52.157 "raid_level": "raid1", 00:12:52.157 "superblock": true, 00:12:52.157 "num_base_bdevs": 2, 00:12:52.157 "num_base_bdevs_discovered": 2, 00:12:52.157 "num_base_bdevs_operational": 2, 00:12:52.157 "process": { 00:12:52.157 "type": "rebuild", 00:12:52.157 "target": "spare", 00:12:52.157 "progress": { 00:12:52.157 "blocks": 20480, 00:12:52.157 "percent": 32 00:12:52.157 } 00:12:52.157 }, 00:12:52.157 "base_bdevs_list": [ 00:12:52.157 { 00:12:52.157 "name": "spare", 00:12:52.157 "uuid": "ab9908b6-3ad4-50a9-8531-97cbf5a6d498", 00:12:52.157 "is_configured": true, 00:12:52.157 "data_offset": 2048, 00:12:52.157 "data_size": 63488 00:12:52.157 }, 00:12:52.157 { 00:12:52.157 "name": "BaseBdev2", 00:12:52.157 "uuid": "bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:12:52.157 "is_configured": true, 00:12:52.157 "data_offset": 2048, 00:12:52.157 "data_size": 63488 
00:12:52.157 } 00:12:52.157 ] 00:12:52.157 }' 00:12:52.157 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.157 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:52.157 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.416 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:52.416 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:52.416 05:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.416 05:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.416 [2024-12-12 05:50:59.686475] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:52.417 [2024-12-12 05:50:59.707880] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:12:52.417 [2024-12-12 05:50:59.707946] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.417 [2024-12-12 05:50:59.707963] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:12:52.417 [2024-12-12 05:50:59.707979] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:12:52.417 05:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.417 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:52.417 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.417 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.417 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 
-- # local raid_level=raid1 00:12:52.417 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.417 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:52.417 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.417 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.417 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.417 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.417 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.417 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.417 05:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.417 05:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.417 05:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.417 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.417 "name": "raid_bdev1", 00:12:52.417 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:12:52.417 "strip_size_kb": 0, 00:12:52.417 "state": "online", 00:12:52.417 "raid_level": "raid1", 00:12:52.417 "superblock": true, 00:12:52.417 "num_base_bdevs": 2, 00:12:52.417 "num_base_bdevs_discovered": 1, 00:12:52.417 "num_base_bdevs_operational": 1, 00:12:52.417 "base_bdevs_list": [ 00:12:52.417 { 00:12:52.417 "name": null, 00:12:52.417 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.417 "is_configured": false, 00:12:52.417 "data_offset": 0, 00:12:52.417 "data_size": 63488 00:12:52.417 }, 00:12:52.417 { 00:12:52.417 "name": "BaseBdev2", 00:12:52.417 "uuid": 
"bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:12:52.417 "is_configured": true, 00:12:52.417 "data_offset": 2048, 00:12:52.417 "data_size": 63488 00:12:52.417 } 00:12:52.417 ] 00:12:52.417 }' 00:12:52.417 05:50:59 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.417 05:50:59 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.677 05:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:52.677 05:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:52.677 05:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:52.677 05:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:52.677 05:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:52.677 05:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.677 05:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.677 05:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.677 05:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.677 05:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.937 05:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:52.937 "name": "raid_bdev1", 00:12:52.937 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:12:52.937 "strip_size_kb": 0, 00:12:52.937 "state": "online", 00:12:52.937 "raid_level": "raid1", 00:12:52.937 "superblock": true, 00:12:52.937 "num_base_bdevs": 2, 00:12:52.937 "num_base_bdevs_discovered": 1, 00:12:52.937 "num_base_bdevs_operational": 1, 00:12:52.937 "base_bdevs_list": [ 00:12:52.937 { 
00:12:52.937 "name": null, 00:12:52.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.937 "is_configured": false, 00:12:52.937 "data_offset": 0, 00:12:52.937 "data_size": 63488 00:12:52.937 }, 00:12:52.937 { 00:12:52.937 "name": "BaseBdev2", 00:12:52.937 "uuid": "bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:12:52.937 "is_configured": true, 00:12:52.937 "data_offset": 2048, 00:12:52.937 "data_size": 63488 00:12:52.937 } 00:12:52.937 ] 00:12:52.937 }' 00:12:52.937 05:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:52.937 05:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:52.937 05:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:52.937 05:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:52.937 05:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:52.937 05:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.937 05:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.937 [2024-12-12 05:51:00.293226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:52.937 [2024-12-12 05:51:00.309390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:12:52.937 05:51:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.937 05:51:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:12:52.937 [2024-12-12 05:51:00.311500] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:12:53.877 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:53.877 05:51:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:53.877 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:53.877 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:53.877 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:53.877 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.877 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.877 05:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.877 05:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.877 05:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.877 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:53.877 "name": "raid_bdev1", 00:12:53.877 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:12:53.877 "strip_size_kb": 0, 00:12:53.877 "state": "online", 00:12:53.877 "raid_level": "raid1", 00:12:53.877 "superblock": true, 00:12:53.877 "num_base_bdevs": 2, 00:12:53.877 "num_base_bdevs_discovered": 2, 00:12:53.877 "num_base_bdevs_operational": 2, 00:12:53.877 "process": { 00:12:53.877 "type": "rebuild", 00:12:53.877 "target": "spare", 00:12:53.877 "progress": { 00:12:53.877 "blocks": 20480, 00:12:53.877 "percent": 32 00:12:53.877 } 00:12:53.877 }, 00:12:53.877 "base_bdevs_list": [ 00:12:53.877 { 00:12:53.877 "name": "spare", 00:12:53.877 "uuid": "ab9908b6-3ad4-50a9-8531-97cbf5a6d498", 00:12:53.877 "is_configured": true, 00:12:53.877 "data_offset": 2048, 00:12:53.877 "data_size": 63488 00:12:53.877 }, 00:12:53.877 { 00:12:53.877 "name": "BaseBdev2", 00:12:53.877 "uuid": "bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:12:53.877 
"is_configured": true, 00:12:53.877 "data_offset": 2048, 00:12:53.877 "data_size": 63488 00:12:53.877 } 00:12:53.877 ] 00:12:53.877 }' 00:12:53.877 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:12:54.137 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=375 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:54.137 "name": "raid_bdev1", 00:12:54.137 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:12:54.137 "strip_size_kb": 0, 00:12:54.137 "state": "online", 00:12:54.137 "raid_level": "raid1", 00:12:54.137 "superblock": true, 00:12:54.137 "num_base_bdevs": 2, 00:12:54.137 "num_base_bdevs_discovered": 2, 00:12:54.137 "num_base_bdevs_operational": 2, 00:12:54.137 "process": { 00:12:54.137 "type": "rebuild", 00:12:54.137 "target": "spare", 00:12:54.137 "progress": { 00:12:54.137 "blocks": 22528, 00:12:54.137 "percent": 35 00:12:54.137 } 00:12:54.137 }, 00:12:54.137 "base_bdevs_list": [ 00:12:54.137 { 00:12:54.137 "name": "spare", 00:12:54.137 "uuid": "ab9908b6-3ad4-50a9-8531-97cbf5a6d498", 00:12:54.137 "is_configured": true, 00:12:54.137 "data_offset": 2048, 00:12:54.137 "data_size": 63488 00:12:54.137 }, 00:12:54.137 { 00:12:54.137 "name": "BaseBdev2", 00:12:54.137 "uuid": "bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:12:54.137 "is_configured": true, 00:12:54.137 "data_offset": 2048, 00:12:54.137 "data_size": 63488 00:12:54.137 } 00:12:54.137 ] 00:12:54.137 }' 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:54.137 05:51:01 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:54.137 05:51:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:55.077 05:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:55.077 05:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:55.077 05:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:55.077 05:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:55.077 05:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:55.077 05:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:55.077 05:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.077 05:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:55.077 05:51:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.077 05:51:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.337 05:51:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.337 05:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:55.337 "name": "raid_bdev1", 00:12:55.337 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:12:55.337 "strip_size_kb": 0, 00:12:55.337 "state": "online", 00:12:55.337 "raid_level": "raid1", 00:12:55.337 "superblock": true, 00:12:55.337 "num_base_bdevs": 2, 00:12:55.337 "num_base_bdevs_discovered": 2, 00:12:55.337 "num_base_bdevs_operational": 2, 00:12:55.337 "process": { 
00:12:55.337 "type": "rebuild", 00:12:55.337 "target": "spare", 00:12:55.337 "progress": { 00:12:55.337 "blocks": 45056, 00:12:55.337 "percent": 70 00:12:55.337 } 00:12:55.337 }, 00:12:55.337 "base_bdevs_list": [ 00:12:55.337 { 00:12:55.337 "name": "spare", 00:12:55.337 "uuid": "ab9908b6-3ad4-50a9-8531-97cbf5a6d498", 00:12:55.337 "is_configured": true, 00:12:55.337 "data_offset": 2048, 00:12:55.337 "data_size": 63488 00:12:55.337 }, 00:12:55.337 { 00:12:55.337 "name": "BaseBdev2", 00:12:55.337 "uuid": "bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:12:55.337 "is_configured": true, 00:12:55.337 "data_offset": 2048, 00:12:55.337 "data_size": 63488 00:12:55.337 } 00:12:55.337 ] 00:12:55.337 }' 00:12:55.337 05:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:55.337 05:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:12:55.337 05:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:55.337 05:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:12:55.337 05:51:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:12:55.907 [2024-12-12 05:51:03.424844] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:12:55.907 [2024-12-12 05:51:03.424994] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:12:55.907 [2024-12-12 05:51:03.425124] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.477 
05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.477 "name": "raid_bdev1", 00:12:56.477 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:12:56.477 "strip_size_kb": 0, 00:12:56.477 "state": "online", 00:12:56.477 "raid_level": "raid1", 00:12:56.477 "superblock": true, 00:12:56.477 "num_base_bdevs": 2, 00:12:56.477 "num_base_bdevs_discovered": 2, 00:12:56.477 "num_base_bdevs_operational": 2, 00:12:56.477 "base_bdevs_list": [ 00:12:56.477 { 00:12:56.477 "name": "spare", 00:12:56.477 "uuid": "ab9908b6-3ad4-50a9-8531-97cbf5a6d498", 00:12:56.477 "is_configured": true, 00:12:56.477 "data_offset": 2048, 00:12:56.477 "data_size": 63488 00:12:56.477 }, 00:12:56.477 { 00:12:56.477 "name": "BaseBdev2", 00:12:56.477 "uuid": "bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:12:56.477 "is_configured": true, 00:12:56.477 "data_offset": 2048, 00:12:56.477 "data_size": 63488 00:12:56.477 } 00:12:56.477 ] 00:12:56.477 }' 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:56.477 "name": "raid_bdev1", 00:12:56.477 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:12:56.477 "strip_size_kb": 0, 00:12:56.477 "state": "online", 00:12:56.477 "raid_level": "raid1", 00:12:56.477 "superblock": true, 00:12:56.477 "num_base_bdevs": 2, 00:12:56.477 "num_base_bdevs_discovered": 2, 00:12:56.477 "num_base_bdevs_operational": 2, 00:12:56.477 "base_bdevs_list": [ 00:12:56.477 { 00:12:56.477 
"name": "spare", 00:12:56.477 "uuid": "ab9908b6-3ad4-50a9-8531-97cbf5a6d498", 00:12:56.477 "is_configured": true, 00:12:56.477 "data_offset": 2048, 00:12:56.477 "data_size": 63488 00:12:56.477 }, 00:12:56.477 { 00:12:56.477 "name": "BaseBdev2", 00:12:56.477 "uuid": "bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:12:56.477 "is_configured": true, 00:12:56.477 "data_offset": 2048, 00:12:56.477 "data_size": 63488 00:12:56.477 } 00:12:56.477 ] 00:12:56.477 }' 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.477 05:51:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.736 05:51:03 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.736 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.736 "name": "raid_bdev1", 00:12:56.736 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:12:56.736 "strip_size_kb": 0, 00:12:56.736 "state": "online", 00:12:56.736 "raid_level": "raid1", 00:12:56.736 "superblock": true, 00:12:56.736 "num_base_bdevs": 2, 00:12:56.736 "num_base_bdevs_discovered": 2, 00:12:56.736 "num_base_bdevs_operational": 2, 00:12:56.736 "base_bdevs_list": [ 00:12:56.736 { 00:12:56.736 "name": "spare", 00:12:56.736 "uuid": "ab9908b6-3ad4-50a9-8531-97cbf5a6d498", 00:12:56.736 "is_configured": true, 00:12:56.736 "data_offset": 2048, 00:12:56.736 "data_size": 63488 00:12:56.736 }, 00:12:56.736 { 00:12:56.736 "name": "BaseBdev2", 00:12:56.736 "uuid": "bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:12:56.736 "is_configured": true, 00:12:56.736 "data_offset": 2048, 00:12:56.736 "data_size": 63488 00:12:56.736 } 00:12:56.736 ] 00:12:56.736 }' 00:12:56.736 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.736 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:12:56.995 [2024-12-12 05:51:04.437053] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:56.995 [2024-12-12 05:51:04.437147] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:56.995 [2024-12-12 05:51:04.437261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:56.995 [2024-12-12 05:51:04.437356] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:56.995 [2024-12-12 05:51:04.437460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:56.995 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:12:57.254 /dev/nbd0 00:12:57.254 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:57.254 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:57.254 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:12:57.254 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:57.254 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:57.254 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:57.254 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:12:57.254 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:57.254 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:57.254 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:57.254 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.254 1+0 records in 00:12:57.254 1+0 records out 00:12:57.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458686 s, 8.9 MB/s 00:12:57.254 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.254 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:57.254 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.254 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:57.254 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:57.254 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.254 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:57.254 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:12:57.513 /dev/nbd1 00:12:57.513 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:57.513 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:57.513 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:12:57.513 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:12:57.513 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:12:57.513 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:12:57.513 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:12:57.513 05:51:04 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:12:57.513 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:12:57.513 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:12:57.513 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:57.513 1+0 records in 00:12:57.513 1+0 records out 00:12:57.513 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000304873 s, 13.4 MB/s 00:12:57.513 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.514 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:12:57.514 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:57.514 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:12:57.514 05:51:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:12:57.514 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:57.514 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:12:57.514 05:51:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:12:57.773 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:12:57.773 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:12:57.773 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:12:57.773 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:57.773 
05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:12:57.773 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:57.773 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:12:58.032 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:58.032 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:58.032 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:58.032 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.032 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.032 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:58.032 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:58.032 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.032 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:58.032 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.317 [2024-12-12 05:51:05.607044] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:12:58.317 [2024-12-12 05:51:05.607109] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:58.317 [2024-12-12 05:51:05.607139] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:58.317 [2024-12-12 05:51:05.607150] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:58.317 [2024-12-12 05:51:05.609337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:58.317 [2024-12-12 05:51:05.609381] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:12:58.317 [2024-12-12 05:51:05.609483] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:12:58.317 [2024-12-12 
05:51:05.609564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:58.317 [2024-12-12 05:51:05.609719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:58.317 spare 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.317 [2024-12-12 05:51:05.709633] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:12:58.317 [2024-12-12 05:51:05.709712] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:58.317 [2024-12-12 05:51:05.710034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:12:58.317 [2024-12-12 05:51:05.710221] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:12:58.317 [2024-12-12 05:51:05.710233] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:12:58.317 [2024-12-12 05:51:05.710426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.317 "name": "raid_bdev1", 00:12:58.317 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:12:58.317 "strip_size_kb": 0, 00:12:58.317 "state": "online", 00:12:58.317 "raid_level": "raid1", 00:12:58.317 "superblock": true, 00:12:58.317 "num_base_bdevs": 2, 00:12:58.317 "num_base_bdevs_discovered": 2, 00:12:58.317 "num_base_bdevs_operational": 2, 00:12:58.317 "base_bdevs_list": [ 00:12:58.317 { 00:12:58.317 "name": "spare", 00:12:58.317 "uuid": "ab9908b6-3ad4-50a9-8531-97cbf5a6d498", 00:12:58.317 "is_configured": true, 00:12:58.317 "data_offset": 2048, 00:12:58.317 "data_size": 63488 00:12:58.317 }, 00:12:58.317 { 00:12:58.317 "name": "BaseBdev2", 00:12:58.317 "uuid": 
"bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:12:58.317 "is_configured": true, 00:12:58.317 "data_offset": 2048, 00:12:58.317 "data_size": 63488 00:12:58.317 } 00:12:58.317 ] 00:12:58.317 }' 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.317 05:51:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:12:58.887 "name": "raid_bdev1", 00:12:58.887 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:12:58.887 "strip_size_kb": 0, 00:12:58.887 "state": "online", 00:12:58.887 "raid_level": "raid1", 00:12:58.887 "superblock": true, 00:12:58.887 "num_base_bdevs": 2, 00:12:58.887 "num_base_bdevs_discovered": 2, 00:12:58.887 "num_base_bdevs_operational": 2, 00:12:58.887 "base_bdevs_list": [ 00:12:58.887 { 
00:12:58.887 "name": "spare", 00:12:58.887 "uuid": "ab9908b6-3ad4-50a9-8531-97cbf5a6d498", 00:12:58.887 "is_configured": true, 00:12:58.887 "data_offset": 2048, 00:12:58.887 "data_size": 63488 00:12:58.887 }, 00:12:58.887 { 00:12:58.887 "name": "BaseBdev2", 00:12:58.887 "uuid": "bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:12:58.887 "is_configured": true, 00:12:58.887 "data_offset": 2048, 00:12:58.887 "data_size": 63488 00:12:58.887 } 00:12:58.887 ] 00:12:58.887 }' 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.887 [2024-12-12 05:51:06.349979] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.887 "name": "raid_bdev1", 00:12:58.887 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:12:58.887 "strip_size_kb": 0, 00:12:58.887 
"state": "online", 00:12:58.887 "raid_level": "raid1", 00:12:58.887 "superblock": true, 00:12:58.887 "num_base_bdevs": 2, 00:12:58.887 "num_base_bdevs_discovered": 1, 00:12:58.887 "num_base_bdevs_operational": 1, 00:12:58.887 "base_bdevs_list": [ 00:12:58.887 { 00:12:58.887 "name": null, 00:12:58.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:58.887 "is_configured": false, 00:12:58.887 "data_offset": 0, 00:12:58.887 "data_size": 63488 00:12:58.887 }, 00:12:58.887 { 00:12:58.887 "name": "BaseBdev2", 00:12:58.887 "uuid": "bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:12:58.887 "is_configured": true, 00:12:58.887 "data_offset": 2048, 00:12:58.887 "data_size": 63488 00:12:58.887 } 00:12:58.887 ] 00:12:58.887 }' 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.887 05:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.456 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:12:59.456 05:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.456 05:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.456 [2024-12-12 05:51:06.801293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:59.456 [2024-12-12 05:51:06.801533] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:12:59.456 [2024-12-12 05:51:06.801554] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:12:59.456 [2024-12-12 05:51:06.801601] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:12:59.456 [2024-12-12 05:51:06.817070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:12:59.456 05:51:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.456 05:51:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:12:59.456 [2024-12-12 05:51:06.818951] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:00.404 05:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:00.404 05:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:00.404 05:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:00.404 05:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:00.404 05:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:00.404 05:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.404 05:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.404 05:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.404 05:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.404 05:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.404 05:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:00.404 "name": "raid_bdev1", 00:13:00.404 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:13:00.404 "strip_size_kb": 0, 00:13:00.404 "state": "online", 00:13:00.404 "raid_level": "raid1", 
00:13:00.404 "superblock": true, 00:13:00.404 "num_base_bdevs": 2, 00:13:00.404 "num_base_bdevs_discovered": 2, 00:13:00.404 "num_base_bdevs_operational": 2, 00:13:00.404 "process": { 00:13:00.404 "type": "rebuild", 00:13:00.404 "target": "spare", 00:13:00.404 "progress": { 00:13:00.404 "blocks": 20480, 00:13:00.404 "percent": 32 00:13:00.404 } 00:13:00.404 }, 00:13:00.404 "base_bdevs_list": [ 00:13:00.404 { 00:13:00.404 "name": "spare", 00:13:00.404 "uuid": "ab9908b6-3ad4-50a9-8531-97cbf5a6d498", 00:13:00.404 "is_configured": true, 00:13:00.404 "data_offset": 2048, 00:13:00.404 "data_size": 63488 00:13:00.404 }, 00:13:00.404 { 00:13:00.404 "name": "BaseBdev2", 00:13:00.404 "uuid": "bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:13:00.404 "is_configured": true, 00:13:00.404 "data_offset": 2048, 00:13:00.404 "data_size": 63488 00:13:00.404 } 00:13:00.404 ] 00:13:00.404 }' 00:13:00.404 05:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:00.675 05:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:00.675 05:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:00.675 05:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:00.675 05:51:07 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:00.675 05:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.675 05:51:07 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.675 [2024-12-12 05:51:07.986584] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:00.675 [2024-12-12 05:51:08.024350] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:00.675 [2024-12-12 05:51:08.024490] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:00.675 [2024-12-12 05:51:08.024524] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:00.675 [2024-12-12 05:51:08.024553] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:00.675 05:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.675 05:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:00.675 05:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:00.675 05:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.675 05:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:00.675 05:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:00.675 05:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:00.675 05:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.675 05:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.675 05:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.675 05:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.675 05:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.676 05:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.676 05:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:00.676 05:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.676 05:51:08 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.676 05:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.676 "name": "raid_bdev1", 00:13:00.676 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:13:00.676 "strip_size_kb": 0, 00:13:00.676 "state": "online", 00:13:00.676 "raid_level": "raid1", 00:13:00.676 "superblock": true, 00:13:00.676 "num_base_bdevs": 2, 00:13:00.676 "num_base_bdevs_discovered": 1, 00:13:00.676 "num_base_bdevs_operational": 1, 00:13:00.676 "base_bdevs_list": [ 00:13:00.676 { 00:13:00.676 "name": null, 00:13:00.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.676 "is_configured": false, 00:13:00.676 "data_offset": 0, 00:13:00.676 "data_size": 63488 00:13:00.676 }, 00:13:00.676 { 00:13:00.676 "name": "BaseBdev2", 00:13:00.676 "uuid": "bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:13:00.676 "is_configured": true, 00:13:00.676 "data_offset": 2048, 00:13:00.676 "data_size": 63488 00:13:00.676 } 00:13:00.676 ] 00:13:00.676 }' 00:13:00.676 05:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.676 05:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.936 05:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:00.936 05:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.936 05:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.936 [2024-12-12 05:51:08.454264] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:00.936 [2024-12-12 05:51:08.454405] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:00.936 [2024-12-12 05:51:08.454434] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:00.936 [2024-12-12 05:51:08.454448] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:00.936 [2024-12-12 05:51:08.454984] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:00.936 [2024-12-12 05:51:08.455011] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:00.936 [2024-12-12 05:51:08.455116] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:00.936 [2024-12-12 05:51:08.455132] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:00.936 [2024-12-12 05:51:08.455144] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:00.936 [2024-12-12 05:51:08.455170] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:01.195 [2024-12-12 05:51:08.470955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:01.195 spare 00:13:01.195 05:51:08 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.195 05:51:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:01.195 [2024-12-12 05:51:08.472815] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:02.134 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:02.134 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.134 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:02.134 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:02.134 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.134 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:02.134 05:51:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.134 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.134 05:51:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.134 05:51:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.134 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.134 "name": "raid_bdev1", 00:13:02.135 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:13:02.135 "strip_size_kb": 0, 00:13:02.135 "state": "online", 00:13:02.135 "raid_level": "raid1", 00:13:02.135 "superblock": true, 00:13:02.135 "num_base_bdevs": 2, 00:13:02.135 "num_base_bdevs_discovered": 2, 00:13:02.135 "num_base_bdevs_operational": 2, 00:13:02.135 "process": { 00:13:02.135 "type": "rebuild", 00:13:02.135 "target": "spare", 00:13:02.135 "progress": { 00:13:02.135 "blocks": 20480, 00:13:02.135 "percent": 32 00:13:02.135 } 00:13:02.135 }, 00:13:02.135 "base_bdevs_list": [ 00:13:02.135 { 00:13:02.135 "name": "spare", 00:13:02.135 "uuid": "ab9908b6-3ad4-50a9-8531-97cbf5a6d498", 00:13:02.135 "is_configured": true, 00:13:02.135 "data_offset": 2048, 00:13:02.135 "data_size": 63488 00:13:02.135 }, 00:13:02.135 { 00:13:02.135 "name": "BaseBdev2", 00:13:02.135 "uuid": "bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:13:02.135 "is_configured": true, 00:13:02.135 "data_offset": 2048, 00:13:02.135 "data_size": 63488 00:13:02.135 } 00:13:02.135 ] 00:13:02.135 }' 00:13:02.135 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.135 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:02.135 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.135 
05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:02.135 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:02.135 05:51:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.135 05:51:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.135 [2024-12-12 05:51:09.608456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:02.395 [2024-12-12 05:51:09.678164] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:02.395 [2024-12-12 05:51:09.678253] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.395 [2024-12-12 05:51:09.678274] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:02.395 [2024-12-12 05:51:09.678283] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:02.395 05:51:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.395 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:02.395 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.395 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.395 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.395 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.395 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:02.395 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.395 05:51:09 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.395 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.395 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.395 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.395 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.395 05:51:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.395 05:51:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.395 05:51:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.395 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.395 "name": "raid_bdev1", 00:13:02.395 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:13:02.395 "strip_size_kb": 0, 00:13:02.395 "state": "online", 00:13:02.395 "raid_level": "raid1", 00:13:02.395 "superblock": true, 00:13:02.395 "num_base_bdevs": 2, 00:13:02.395 "num_base_bdevs_discovered": 1, 00:13:02.395 "num_base_bdevs_operational": 1, 00:13:02.395 "base_bdevs_list": [ 00:13:02.395 { 00:13:02.395 "name": null, 00:13:02.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.395 "is_configured": false, 00:13:02.395 "data_offset": 0, 00:13:02.395 "data_size": 63488 00:13:02.395 }, 00:13:02.395 { 00:13:02.395 "name": "BaseBdev2", 00:13:02.395 "uuid": "bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:13:02.395 "is_configured": true, 00:13:02.395 "data_offset": 2048, 00:13:02.395 "data_size": 63488 00:13:02.395 } 00:13:02.395 ] 00:13:02.395 }' 00:13:02.395 05:51:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.395 05:51:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.655 05:51:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:02.655 05:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:02.655 05:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:02.655 05:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:02.655 05:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:02.655 05:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.655 05:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.655 05:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.655 05:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.916 05:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.916 05:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:02.916 "name": "raid_bdev1", 00:13:02.916 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:13:02.916 "strip_size_kb": 0, 00:13:02.916 "state": "online", 00:13:02.916 "raid_level": "raid1", 00:13:02.916 "superblock": true, 00:13:02.916 "num_base_bdevs": 2, 00:13:02.916 "num_base_bdevs_discovered": 1, 00:13:02.916 "num_base_bdevs_operational": 1, 00:13:02.916 "base_bdevs_list": [ 00:13:02.916 { 00:13:02.916 "name": null, 00:13:02.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.916 "is_configured": false, 00:13:02.916 "data_offset": 0, 00:13:02.916 "data_size": 63488 00:13:02.916 }, 00:13:02.916 { 00:13:02.916 "name": "BaseBdev2", 00:13:02.916 "uuid": "bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:13:02.916 "is_configured": true, 00:13:02.916 "data_offset": 2048, 00:13:02.916 "data_size": 
63488 00:13:02.916 } 00:13:02.916 ] 00:13:02.916 }' 00:13:02.916 05:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:02.916 05:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:02.916 05:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:02.916 05:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:02.916 05:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:02.916 05:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.916 05:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.916 05:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.916 05:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:02.916 05:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.916 05:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:02.916 [2024-12-12 05:51:10.308780] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:02.916 [2024-12-12 05:51:10.308893] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:02.916 [2024-12-12 05:51:10.308939] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:02.916 [2024-12-12 05:51:10.308961] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:02.916 [2024-12-12 05:51:10.309446] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:02.916 [2024-12-12 05:51:10.309484] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:13:02.916 [2024-12-12 05:51:10.309610] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:02.916 [2024-12-12 05:51:10.309628] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:02.916 [2024-12-12 05:51:10.309639] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:02.916 [2024-12-12 05:51:10.309650] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:02.916 BaseBdev1 00:13:02.916 05:51:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.916 05:51:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:03.855 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:03.855 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:03.855 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:03.855 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:03.855 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:03.855 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:03.855 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:03.855 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:03.855 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:03.855 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:03.855 05:51:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:03.855 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:03.855 05:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.855 05:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.855 05:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.855 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:03.855 "name": "raid_bdev1", 00:13:03.855 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:13:03.855 "strip_size_kb": 0, 00:13:03.855 "state": "online", 00:13:03.855 "raid_level": "raid1", 00:13:03.855 "superblock": true, 00:13:03.855 "num_base_bdevs": 2, 00:13:03.855 "num_base_bdevs_discovered": 1, 00:13:03.855 "num_base_bdevs_operational": 1, 00:13:03.855 "base_bdevs_list": [ 00:13:03.855 { 00:13:03.855 "name": null, 00:13:03.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.855 "is_configured": false, 00:13:03.855 "data_offset": 0, 00:13:03.855 "data_size": 63488 00:13:03.855 }, 00:13:03.855 { 00:13:03.855 "name": "BaseBdev2", 00:13:03.855 "uuid": "bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:13:03.855 "is_configured": true, 00:13:03.855 "data_offset": 2048, 00:13:03.855 "data_size": 63488 00:13:03.855 } 00:13:03.855 ] 00:13:03.855 }' 00:13:03.855 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:03.855 05:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:04.425 "name": "raid_bdev1", 00:13:04.425 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:13:04.425 "strip_size_kb": 0, 00:13:04.425 "state": "online", 00:13:04.425 "raid_level": "raid1", 00:13:04.425 "superblock": true, 00:13:04.425 "num_base_bdevs": 2, 00:13:04.425 "num_base_bdevs_discovered": 1, 00:13:04.425 "num_base_bdevs_operational": 1, 00:13:04.425 "base_bdevs_list": [ 00:13:04.425 { 00:13:04.425 "name": null, 00:13:04.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.425 "is_configured": false, 00:13:04.425 "data_offset": 0, 00:13:04.425 "data_size": 63488 00:13:04.425 }, 00:13:04.425 { 00:13:04.425 "name": "BaseBdev2", 00:13:04.425 "uuid": "bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:13:04.425 "is_configured": true, 00:13:04.425 "data_offset": 2048, 00:13:04.425 "data_size": 63488 00:13:04.425 } 00:13:04.425 ] 00:13:04.425 }' 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:04.425 05:51:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:04.425 [2024-12-12 05:51:11.898514] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:04.425 [2024-12-12 05:51:11.898765] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:04.425 [2024-12-12 05:51:11.898836] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:04.425 request: 00:13:04.425 { 00:13:04.425 "base_bdev": "BaseBdev1", 00:13:04.425 "raid_bdev": "raid_bdev1", 00:13:04.425 "method": 
"bdev_raid_add_base_bdev", 00:13:04.425 "req_id": 1 00:13:04.425 } 00:13:04.425 Got JSON-RPC error response 00:13:04.425 response: 00:13:04.425 { 00:13:04.425 "code": -22, 00:13:04.425 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:04.425 } 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:04.425 05:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:13:04.426 05:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:04.426 05:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:04.426 05:51:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:04.426 05:51:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:05.807 05:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:05.807 05:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.807 05:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:05.807 05:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:05.807 05:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:05.807 05:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:05.807 05:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.807 05:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.807 05:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.807 05:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.807 05:51:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.807 05:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.807 05:51:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.807 05:51:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:05.807 05:51:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.807 05:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.807 "name": "raid_bdev1", 00:13:05.807 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:13:05.807 "strip_size_kb": 0, 00:13:05.807 "state": "online", 00:13:05.807 "raid_level": "raid1", 00:13:05.807 "superblock": true, 00:13:05.807 "num_base_bdevs": 2, 00:13:05.807 "num_base_bdevs_discovered": 1, 00:13:05.807 "num_base_bdevs_operational": 1, 00:13:05.807 "base_bdevs_list": [ 00:13:05.807 { 00:13:05.807 "name": null, 00:13:05.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.807 "is_configured": false, 00:13:05.807 "data_offset": 0, 00:13:05.807 "data_size": 63488 00:13:05.807 }, 00:13:05.807 { 00:13:05.807 "name": "BaseBdev2", 00:13:05.807 "uuid": "bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:13:05.807 "is_configured": true, 00:13:05.807 "data_offset": 2048, 00:13:05.807 "data_size": 63488 00:13:05.807 } 00:13:05.807 ] 00:13:05.807 }' 00:13:05.807 05:51:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.807 05:51:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:06.066 "name": "raid_bdev1", 00:13:06.066 "uuid": "e614571b-cf39-4d94-8f1c-b9f9f790113b", 00:13:06.066 "strip_size_kb": 0, 00:13:06.066 "state": "online", 00:13:06.066 "raid_level": "raid1", 00:13:06.066 "superblock": true, 00:13:06.066 "num_base_bdevs": 2, 00:13:06.066 "num_base_bdevs_discovered": 1, 00:13:06.066 "num_base_bdevs_operational": 1, 00:13:06.066 "base_bdevs_list": [ 00:13:06.066 { 00:13:06.066 "name": null, 00:13:06.066 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.066 "is_configured": false, 00:13:06.066 "data_offset": 0, 00:13:06.066 "data_size": 63488 00:13:06.066 }, 00:13:06.066 { 00:13:06.066 "name": "BaseBdev2", 00:13:06.066 "uuid": "bafdea20-cebf-55b7-be2e-7ffa764793a6", 00:13:06.066 "is_configured": true, 00:13:06.066 "data_offset": 2048, 00:13:06.066 "data_size": 63488 00:13:06.066 } 00:13:06.066 ] 00:13:06.066 }' 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 76616 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 76616 ']' 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 76616 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76616 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76616' 00:13:06.066 killing process with pid 76616 00:13:06.066 05:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 76616 00:13:06.066 Received shutdown signal, test time was about 60.000000 seconds 00:13:06.066 00:13:06.066 Latency(us) 00:13:06.066 [2024-12-12T05:51:13.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.066 [2024-12-12T05:51:13.588Z] =================================================================================================================== 00:13:06.067 [2024-12-12T05:51:13.589Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:06.067 [2024-12-12 05:51:13.471426] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:06.067 [2024-12-12 
05:51:13.471619] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:06.067 05:51:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 76616 00:13:06.067 [2024-12-12 05:51:13.471688] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:06.067 [2024-12-12 05:51:13.471726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:06.325 [2024-12-12 05:51:13.794646] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:07.705 05:51:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:07.705 00:13:07.705 real 0m23.116s 00:13:07.705 user 0m27.916s 00:13:07.705 sys 0m3.556s 00:13:07.705 05:51:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:07.705 ************************************ 00:13:07.705 END TEST raid_rebuild_test_sb 00:13:07.705 ************************************ 00:13:07.705 05:51:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:07.705 05:51:14 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:07.705 05:51:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:07.705 05:51:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.705 05:51:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:07.705 ************************************ 00:13:07.705 START TEST raid_rebuild_test_io 00:13:07.705 ************************************ 00:13:07.705 05:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:13:07.705 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:07.705 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:07.705 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:07.705 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:07.705 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:07.705 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:07.705 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:07.705 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:07.706 
05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77347 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77347 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 77347 ']' 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.706 05:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:07.706 [2024-12-12 05:51:15.097392] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:13:07.706 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:07.706 Zero copy mechanism will not be used. 
00:13:07.706 [2024-12-12 05:51:15.097956] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77347 ] 00:13:07.965 [2024-12-12 05:51:15.268166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.965 [2024-12-12 05:51:15.374303] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.224 [2024-12-12 05:51:15.565103] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.224 [2024-12-12 05:51:15.565159] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:08.483 05:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:08.483 05:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:13:08.483 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:08.483 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:08.483 05:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.483 05:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.483 BaseBdev1_malloc 00:13:08.483 05:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.483 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:08.483 05:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.483 05:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.483 [2024-12-12 05:51:15.954922] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:08.483 [2024-12-12 05:51:15.954980] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.483 [2024-12-12 05:51:15.955003] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:08.483 [2024-12-12 05:51:15.955013] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.483 [2024-12-12 05:51:15.957081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.483 [2024-12-12 05:51:15.957118] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:08.483 BaseBdev1 00:13:08.483 05:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.483 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:08.483 05:51:15 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:08.483 05:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.483 05:51:15 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.483 BaseBdev2_malloc 00:13:08.483 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.483 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:08.483 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.483 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.743 [2024-12-12 05:51:16.008881] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:08.743 [2024-12-12 05:51:16.008947] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.743 [2024-12-12 05:51:16.008965] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:08.743 [2024-12-12 05:51:16.008977] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.743 [2024-12-12 05:51:16.010946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.743 [2024-12-12 05:51:16.010981] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:08.743 BaseBdev2 00:13:08.743 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.743 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.744 spare_malloc 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.744 spare_delay 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.744 [2024-12-12 05:51:16.107244] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:13:08.744 [2024-12-12 05:51:16.107298] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:08.744 [2024-12-12 05:51:16.107318] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:08.744 [2024-12-12 05:51:16.107329] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:08.744 [2024-12-12 05:51:16.109304] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:08.744 [2024-12-12 05:51:16.109340] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:08.744 spare 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.744 [2024-12-12 05:51:16.115277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:08.744 [2024-12-12 05:51:16.116990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:08.744 [2024-12-12 05:51:16.117081] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:08.744 [2024-12-12 05:51:16.117095] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:08.744 [2024-12-12 05:51:16.117361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:08.744 [2024-12-12 05:51:16.117553] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:08.744 [2024-12-12 05:51:16.117574] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:13:08.744 [2024-12-12 05:51:16.117728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:08.744 
"name": "raid_bdev1", 00:13:08.744 "uuid": "2bf8a6af-8332-4332-b09d-a44f1505dbca", 00:13:08.744 "strip_size_kb": 0, 00:13:08.744 "state": "online", 00:13:08.744 "raid_level": "raid1", 00:13:08.744 "superblock": false, 00:13:08.744 "num_base_bdevs": 2, 00:13:08.744 "num_base_bdevs_discovered": 2, 00:13:08.744 "num_base_bdevs_operational": 2, 00:13:08.744 "base_bdevs_list": [ 00:13:08.744 { 00:13:08.744 "name": "BaseBdev1", 00:13:08.744 "uuid": "470cfb11-4dd1-512d-9069-a8cb1a2a7ed8", 00:13:08.744 "is_configured": true, 00:13:08.744 "data_offset": 0, 00:13:08.744 "data_size": 65536 00:13:08.744 }, 00:13:08.744 { 00:13:08.744 "name": "BaseBdev2", 00:13:08.744 "uuid": "ee2579ff-03e8-54b1-b789-f55718b9ab54", 00:13:08.744 "is_configured": true, 00:13:08.744 "data_offset": 0, 00:13:08.744 "data_size": 65536 00:13:08.744 } 00:13:08.744 ] 00:13:08.744 }' 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:08.744 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:09.314 [2024-12-12 05:51:16.574764] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.314 [2024-12-12 05:51:16.654457] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:09.314 05:51:16 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:09.314 "name": "raid_bdev1", 00:13:09.314 "uuid": "2bf8a6af-8332-4332-b09d-a44f1505dbca", 00:13:09.314 "strip_size_kb": 0, 00:13:09.314 "state": "online", 00:13:09.314 "raid_level": "raid1", 00:13:09.314 "superblock": false, 00:13:09.314 "num_base_bdevs": 2, 00:13:09.314 "num_base_bdevs_discovered": 1, 00:13:09.314 "num_base_bdevs_operational": 1, 00:13:09.314 "base_bdevs_list": [ 00:13:09.314 { 00:13:09.314 "name": null, 00:13:09.314 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.314 "is_configured": false, 00:13:09.314 "data_offset": 0, 00:13:09.314 "data_size": 65536 00:13:09.314 }, 00:13:09.314 { 00:13:09.314 "name": "BaseBdev2", 00:13:09.314 "uuid": "ee2579ff-03e8-54b1-b789-f55718b9ab54", 00:13:09.314 "is_configured": true, 00:13:09.314 "data_offset": 0, 00:13:09.314 "data_size": 65536 00:13:09.314 } 00:13:09.314 ] 00:13:09.314 }' 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:09.314 05:51:16 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.314 [2024-12-12 05:51:16.750930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:09.314 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:09.314 Zero copy mechanism will not be used. 00:13:09.315 Running I/O for 60 seconds... 00:13:09.574 05:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:09.574 05:51:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:09.574 05:51:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:09.840 [2024-12-12 05:51:17.096582] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:09.840 05:51:17 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:09.840 05:51:17 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:09.840 [2024-12-12 05:51:17.149451] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:09.840 [2024-12-12 05:51:17.151294] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:09.840 [2024-12-12 05:51:17.287007] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:10.106 [2024-12-12 05:51:17.411852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:10.106 [2024-12-12 05:51:17.412165] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:10.366 [2024-12-12 05:51:17.742170] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:10.625 237.00 IOPS, 711.00 MiB/s 
[2024-12-12T05:51:18.147Z] [2024-12-12 05:51:17.956039] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:10.625 [2024-12-12 05:51:17.956353] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:10.625 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:10.625 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:10.625 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:10.625 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:10.625 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:10.625 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.625 05:51:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.625 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.625 05:51:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:10.885 "name": "raid_bdev1", 00:13:10.885 "uuid": "2bf8a6af-8332-4332-b09d-a44f1505dbca", 00:13:10.885 "strip_size_kb": 0, 00:13:10.885 "state": "online", 00:13:10.885 "raid_level": "raid1", 00:13:10.885 "superblock": false, 00:13:10.885 "num_base_bdevs": 2, 00:13:10.885 "num_base_bdevs_discovered": 2, 00:13:10.885 "num_base_bdevs_operational": 2, 00:13:10.885 "process": { 00:13:10.885 "type": "rebuild", 00:13:10.885 "target": "spare", 
00:13:10.885 "progress": { 00:13:10.885 "blocks": 10240, 00:13:10.885 "percent": 15 00:13:10.885 } 00:13:10.885 }, 00:13:10.885 "base_bdevs_list": [ 00:13:10.885 { 00:13:10.885 "name": "spare", 00:13:10.885 "uuid": "434786ab-cd2e-5315-88f0-53ed23e68d9b", 00:13:10.885 "is_configured": true, 00:13:10.885 "data_offset": 0, 00:13:10.885 "data_size": 65536 00:13:10.885 }, 00:13:10.885 { 00:13:10.885 "name": "BaseBdev2", 00:13:10.885 "uuid": "ee2579ff-03e8-54b1-b789-f55718b9ab54", 00:13:10.885 "is_configured": true, 00:13:10.885 "data_offset": 0, 00:13:10.885 "data_size": 65536 00:13:10.885 } 00:13:10.885 ] 00:13:10.885 }' 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:10.885 [2024-12-12 05:51:18.284727] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:10.885 [2024-12-12 05:51:18.284796] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:10.885 [2024-12-12 05:51:18.286249] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:10.885 [2024-12-12 05:51:18.294229] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.885 [2024-12-12 05:51:18.294274] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:10.885 [2024-12-12 05:51:18.294287] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:10.885 [2024-12-12 05:51:18.342725] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.885 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.885 "name": "raid_bdev1", 00:13:10.885 "uuid": "2bf8a6af-8332-4332-b09d-a44f1505dbca", 00:13:10.885 "strip_size_kb": 0, 00:13:10.885 "state": "online", 00:13:10.885 "raid_level": "raid1", 00:13:10.885 "superblock": false, 00:13:10.885 "num_base_bdevs": 2, 00:13:10.885 "num_base_bdevs_discovered": 1, 00:13:10.885 "num_base_bdevs_operational": 1, 00:13:10.886 "base_bdevs_list": [ 00:13:10.886 { 00:13:10.886 "name": null, 00:13:10.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.886 "is_configured": false, 00:13:10.886 "data_offset": 0, 00:13:10.886 "data_size": 65536 00:13:10.886 }, 00:13:10.886 { 00:13:10.886 "name": "BaseBdev2", 00:13:10.886 "uuid": "ee2579ff-03e8-54b1-b789-f55718b9ab54", 00:13:10.886 "is_configured": true, 00:13:10.886 "data_offset": 0, 00:13:10.886 "data_size": 65536 00:13:10.886 } 00:13:10.886 ] 00:13:10.886 }' 00:13:10.886 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.886 05:51:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.455 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:11.455 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:11.455 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:11.455 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:11.455 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:11.455 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.455 05:51:18 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.455 05:51:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.455 05:51:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.455 207.00 IOPS, 621.00 MiB/s [2024-12-12T05:51:18.977Z] 05:51:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.455 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:11.455 "name": "raid_bdev1", 00:13:11.455 "uuid": "2bf8a6af-8332-4332-b09d-a44f1505dbca", 00:13:11.455 "strip_size_kb": 0, 00:13:11.455 "state": "online", 00:13:11.455 "raid_level": "raid1", 00:13:11.455 "superblock": false, 00:13:11.455 "num_base_bdevs": 2, 00:13:11.455 "num_base_bdevs_discovered": 1, 00:13:11.455 "num_base_bdevs_operational": 1, 00:13:11.455 "base_bdevs_list": [ 00:13:11.455 { 00:13:11.455 "name": null, 00:13:11.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.455 "is_configured": false, 00:13:11.455 "data_offset": 0, 00:13:11.455 "data_size": 65536 00:13:11.455 }, 00:13:11.455 { 00:13:11.455 "name": "BaseBdev2", 00:13:11.455 "uuid": "ee2579ff-03e8-54b1-b789-f55718b9ab54", 00:13:11.455 "is_configured": true, 00:13:11.455 "data_offset": 0, 00:13:11.455 "data_size": 65536 00:13:11.455 } 00:13:11.455 ] 00:13:11.455 }' 00:13:11.455 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:11.455 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:11.455 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:11.455 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:11.455 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:11.455 05:51:18 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.455 05:51:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:11.455 [2024-12-12 05:51:18.875648] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:11.455 05:51:18 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.455 05:51:18 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:11.455 [2024-12-12 05:51:18.924138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:11.455 [2024-12-12 05:51:18.926026] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:11.714 [2024-12-12 05:51:19.033595] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:11.714 [2024-12-12 05:51:19.034000] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:11.714 [2024-12-12 05:51:19.141711] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:11.714 [2024-12-12 05:51:19.142029] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:11.972 [2024-12-12 05:51:19.476167] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:12.232 [2024-12-12 05:51:19.578092] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:12.491 199.00 IOPS, 597.00 MiB/s [2024-12-12T05:51:20.013Z] [2024-12-12 05:51:19.781431] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:12.491 [2024-12-12 05:51:19.781920] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:12.491 05:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.491 05:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.491 05:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:12.491 05:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.491 05:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.491 05:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.491 05:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.491 05:51:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.491 05:51:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.491 05:51:19 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.491 05:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.491 "name": "raid_bdev1", 00:13:12.491 "uuid": "2bf8a6af-8332-4332-b09d-a44f1505dbca", 00:13:12.491 "strip_size_kb": 0, 00:13:12.491 "state": "online", 00:13:12.491 "raid_level": "raid1", 00:13:12.491 "superblock": false, 00:13:12.491 "num_base_bdevs": 2, 00:13:12.491 "num_base_bdevs_discovered": 2, 00:13:12.491 "num_base_bdevs_operational": 2, 00:13:12.491 "process": { 00:13:12.492 "type": "rebuild", 00:13:12.492 "target": "spare", 00:13:12.492 "progress": { 00:13:12.492 "blocks": 14336, 00:13:12.492 "percent": 21 00:13:12.492 } 00:13:12.492 }, 00:13:12.492 "base_bdevs_list": [ 00:13:12.492 { 00:13:12.492 "name": "spare", 00:13:12.492 "uuid": 
"434786ab-cd2e-5315-88f0-53ed23e68d9b", 00:13:12.492 "is_configured": true, 00:13:12.492 "data_offset": 0, 00:13:12.492 "data_size": 65536 00:13:12.492 }, 00:13:12.492 { 00:13:12.492 "name": "BaseBdev2", 00:13:12.492 "uuid": "ee2579ff-03e8-54b1-b789-f55718b9ab54", 00:13:12.492 "is_configured": true, 00:13:12.492 "data_offset": 0, 00:13:12.492 "data_size": 65536 00:13:12.492 } 00:13:12.492 ] 00:13:12.492 }' 00:13:12.492 05:51:19 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.492 [2024-12-12 05:51:20.007111] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=394 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:12.752 "name": "raid_bdev1", 00:13:12.752 "uuid": "2bf8a6af-8332-4332-b09d-a44f1505dbca", 00:13:12.752 "strip_size_kb": 0, 00:13:12.752 "state": "online", 00:13:12.752 "raid_level": "raid1", 00:13:12.752 "superblock": false, 00:13:12.752 "num_base_bdevs": 2, 00:13:12.752 "num_base_bdevs_discovered": 2, 00:13:12.752 "num_base_bdevs_operational": 2, 00:13:12.752 "process": { 00:13:12.752 "type": "rebuild", 00:13:12.752 "target": "spare", 00:13:12.752 "progress": { 00:13:12.752 "blocks": 16384, 00:13:12.752 "percent": 25 00:13:12.752 } 00:13:12.752 }, 00:13:12.752 "base_bdevs_list": [ 00:13:12.752 { 00:13:12.752 "name": "spare", 00:13:12.752 "uuid": "434786ab-cd2e-5315-88f0-53ed23e68d9b", 00:13:12.752 "is_configured": true, 00:13:12.752 "data_offset": 0, 00:13:12.752 "data_size": 65536 00:13:12.752 }, 00:13:12.752 { 00:13:12.752 "name": "BaseBdev2", 00:13:12.752 "uuid": "ee2579ff-03e8-54b1-b789-f55718b9ab54", 00:13:12.752 "is_configured": true, 00:13:12.752 "data_offset": 0, 00:13:12.752 "data_size": 65536 00:13:12.752 } 00:13:12.752 ] 00:13:12.752 }' 00:13:12.752 05:51:20 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:12.752 05:51:20 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:12.752 [2024-12-12 05:51:20.234742] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:13.011 [2024-12-12 05:51:20.349695] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:13.271 [2024-12-12 05:51:20.677138] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:13.531 162.25 IOPS, 486.75 MiB/s [2024-12-12T05:51:21.053Z] [2024-12-12 05:51:20.889626] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:13.531 [2024-12-12 05:51:20.889921] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:13.790 05:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:13.790 05:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.790 05:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.790 05:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.790 05:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.790 05:51:21 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.790 05:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.790 05:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.790 05:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.790 05:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:13.790 [2024-12-12 05:51:21.215114] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:13.790 05:51:21 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.790 05:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.790 "name": "raid_bdev1", 00:13:13.790 "uuid": "2bf8a6af-8332-4332-b09d-a44f1505dbca", 00:13:13.790 "strip_size_kb": 0, 00:13:13.790 "state": "online", 00:13:13.790 "raid_level": "raid1", 00:13:13.790 "superblock": false, 00:13:13.790 "num_base_bdevs": 2, 00:13:13.790 "num_base_bdevs_discovered": 2, 00:13:13.790 "num_base_bdevs_operational": 2, 00:13:13.790 "process": { 00:13:13.790 "type": "rebuild", 00:13:13.790 "target": "spare", 00:13:13.790 "progress": { 00:13:13.790 "blocks": 30720, 00:13:13.790 "percent": 46 00:13:13.790 } 00:13:13.790 }, 00:13:13.790 "base_bdevs_list": [ 00:13:13.790 { 00:13:13.790 "name": "spare", 00:13:13.790 "uuid": "434786ab-cd2e-5315-88f0-53ed23e68d9b", 00:13:13.790 "is_configured": true, 00:13:13.790 "data_offset": 0, 00:13:13.790 "data_size": 65536 00:13:13.790 }, 00:13:13.790 { 00:13:13.790 "name": "BaseBdev2", 00:13:13.790 "uuid": "ee2579ff-03e8-54b1-b789-f55718b9ab54", 00:13:13.790 "is_configured": true, 00:13:13.790 "data_offset": 0, 00:13:13.790 "data_size": 65536 00:13:13.790 } 00:13:13.790 ] 00:13:13.790 }' 00:13:13.790 05:51:21 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.790 05:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.790 05:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:14.049 05:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:14.049 05:51:21 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:14.308 136.60 IOPS, 409.80 MiB/s [2024-12-12T05:51:21.830Z] [2024-12-12 05:51:21.755904] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:14.568 [2024-12-12 05:51:21.994115] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:15.137 05:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:15.137 05:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.137 05:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.137 05:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.137 05:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.137 05:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.137 05:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.137 05:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.137 05:51:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.137 05:51:22 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:13:15.137 05:51:22 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.137 05:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.137 "name": "raid_bdev1", 00:13:15.137 "uuid": "2bf8a6af-8332-4332-b09d-a44f1505dbca", 00:13:15.137 "strip_size_kb": 0, 00:13:15.137 "state": "online", 00:13:15.137 "raid_level": "raid1", 00:13:15.137 "superblock": false, 00:13:15.137 "num_base_bdevs": 2, 00:13:15.137 "num_base_bdevs_discovered": 2, 00:13:15.137 "num_base_bdevs_operational": 2, 00:13:15.137 "process": { 00:13:15.137 "type": "rebuild", 00:13:15.137 "target": "spare", 00:13:15.137 "progress": { 00:13:15.137 "blocks": 49152, 00:13:15.137 "percent": 75 00:13:15.137 } 00:13:15.137 }, 00:13:15.137 "base_bdevs_list": [ 00:13:15.137 { 00:13:15.137 "name": "spare", 00:13:15.137 "uuid": "434786ab-cd2e-5315-88f0-53ed23e68d9b", 00:13:15.137 "is_configured": true, 00:13:15.137 "data_offset": 0, 00:13:15.137 "data_size": 65536 00:13:15.137 }, 00:13:15.137 { 00:13:15.137 "name": "BaseBdev2", 00:13:15.137 "uuid": "ee2579ff-03e8-54b1-b789-f55718b9ab54", 00:13:15.137 "is_configured": true, 00:13:15.137 "data_offset": 0, 00:13:15.137 "data_size": 65536 00:13:15.137 } 00:13:15.137 ] 00:13:15.137 }' 00:13:15.137 05:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.137 05:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.137 05:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.137 05:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.137 05:51:22 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:15.137 [2024-12-12 05:51:22.516598] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 
offset_begin: 49152 offset_end: 55296 00:13:15.397 120.67 IOPS, 362.00 MiB/s [2024-12-12T05:51:22.919Z] [2024-12-12 05:51:22.831887] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:15.965 [2024-12-12 05:51:23.371362] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:15.965 [2024-12-12 05:51:23.476569] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:15.965 [2024-12-12 05:51:23.478314] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.225 "name": "raid_bdev1", 
00:13:16.225 "uuid": "2bf8a6af-8332-4332-b09d-a44f1505dbca", 00:13:16.225 "strip_size_kb": 0, 00:13:16.225 "state": "online", 00:13:16.225 "raid_level": "raid1", 00:13:16.225 "superblock": false, 00:13:16.225 "num_base_bdevs": 2, 00:13:16.225 "num_base_bdevs_discovered": 2, 00:13:16.225 "num_base_bdevs_operational": 2, 00:13:16.225 "base_bdevs_list": [ 00:13:16.225 { 00:13:16.225 "name": "spare", 00:13:16.225 "uuid": "434786ab-cd2e-5315-88f0-53ed23e68d9b", 00:13:16.225 "is_configured": true, 00:13:16.225 "data_offset": 0, 00:13:16.225 "data_size": 65536 00:13:16.225 }, 00:13:16.225 { 00:13:16.225 "name": "BaseBdev2", 00:13:16.225 "uuid": "ee2579ff-03e8-54b1-b789-f55718b9ab54", 00:13:16.225 "is_configured": true, 00:13:16.225 "data_offset": 0, 00:13:16.225 "data_size": 65536 00:13:16.225 } 00:13:16.225 ] 00:13:16.225 }' 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:16.225 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.226 05:51:23 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.226 05:51:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.226 05:51:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.226 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.226 05:51:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.226 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.226 "name": "raid_bdev1", 00:13:16.226 "uuid": "2bf8a6af-8332-4332-b09d-a44f1505dbca", 00:13:16.226 "strip_size_kb": 0, 00:13:16.226 "state": "online", 00:13:16.226 "raid_level": "raid1", 00:13:16.226 "superblock": false, 00:13:16.226 "num_base_bdevs": 2, 00:13:16.226 "num_base_bdevs_discovered": 2, 00:13:16.226 "num_base_bdevs_operational": 2, 00:13:16.226 "base_bdevs_list": [ 00:13:16.226 { 00:13:16.226 "name": "spare", 00:13:16.226 "uuid": "434786ab-cd2e-5315-88f0-53ed23e68d9b", 00:13:16.226 "is_configured": true, 00:13:16.226 "data_offset": 0, 00:13:16.226 "data_size": 65536 00:13:16.226 }, 00:13:16.226 { 00:13:16.226 "name": "BaseBdev2", 00:13:16.226 "uuid": "ee2579ff-03e8-54b1-b789-f55718b9ab54", 00:13:16.226 "is_configured": true, 00:13:16.226 "data_offset": 0, 00:13:16.226 "data_size": 65536 00:13:16.226 } 00:13:16.226 ] 00:13:16.226 }' 00:13:16.226 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.486 107.57 IOPS, 322.71 MiB/s [2024-12-12T05:51:24.008Z] 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:16.486 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.486 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:16.486 05:51:23 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:16.486 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.486 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.486 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:16.486 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:16.486 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:16.486 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.486 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.486 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.486 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.486 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.486 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.486 05:51:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.486 05:51:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.486 05:51:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.486 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.486 "name": "raid_bdev1", 00:13:16.486 "uuid": "2bf8a6af-8332-4332-b09d-a44f1505dbca", 00:13:16.486 "strip_size_kb": 0, 00:13:16.486 "state": "online", 00:13:16.486 "raid_level": "raid1", 00:13:16.486 "superblock": false, 00:13:16.486 "num_base_bdevs": 2, 
00:13:16.486 "num_base_bdevs_discovered": 2, 00:13:16.486 "num_base_bdevs_operational": 2, 00:13:16.486 "base_bdevs_list": [ 00:13:16.486 { 00:13:16.486 "name": "spare", 00:13:16.486 "uuid": "434786ab-cd2e-5315-88f0-53ed23e68d9b", 00:13:16.486 "is_configured": true, 00:13:16.486 "data_offset": 0, 00:13:16.486 "data_size": 65536 00:13:16.486 }, 00:13:16.486 { 00:13:16.486 "name": "BaseBdev2", 00:13:16.486 "uuid": "ee2579ff-03e8-54b1-b789-f55718b9ab54", 00:13:16.486 "is_configured": true, 00:13:16.486 "data_offset": 0, 00:13:16.486 "data_size": 65536 00:13:16.486 } 00:13:16.486 ] 00:13:16.486 }' 00:13:16.486 05:51:23 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.486 05:51:23 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.746 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:16.746 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.746 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:16.746 [2024-12-12 05:51:24.195234] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:16.746 [2024-12-12 05:51:24.195267] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:16.746 00:13:16.746 Latency(us) 00:13:16.746 [2024-12-12T05:51:24.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.746 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:16.746 raid_bdev1 : 7.49 102.29 306.87 0.00 0.00 13115.31 302.28 107147.07 00:13:16.746 [2024-12-12T05:51:24.268Z] =================================================================================================================== 00:13:16.746 [2024-12-12T05:51:24.268Z] Total : 102.29 306.87 0.00 0.00 13115.31 302.28 107147.07 00:13:16.746 [2024-12-12 
05:51:24.247460] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:16.746 [2024-12-12 05:51:24.247532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:16.746 [2024-12-12 05:51:24.247607] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:16.746 [2024-12-12 05:51:24.247619] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:16.746 { 00:13:16.746 "results": [ 00:13:16.746 { 00:13:16.746 "job": "raid_bdev1", 00:13:16.746 "core_mask": "0x1", 00:13:16.746 "workload": "randrw", 00:13:16.746 "percentage": 50, 00:13:16.746 "status": "finished", 00:13:16.746 "queue_depth": 2, 00:13:16.746 "io_size": 3145728, 00:13:16.746 "runtime": 7.488619, 00:13:16.746 "iops": 102.28855280259285, 00:13:16.746 "mibps": 306.86565840777854, 00:13:16.746 "io_failed": 0, 00:13:16.746 "io_timeout": 0, 00:13:16.746 "avg_latency_us": 13115.309619528658, 00:13:16.746 "min_latency_us": 302.2812227074236, 00:13:16.746 "max_latency_us": 107147.0672489083 00:13:16.746 } 00:13:16.746 ], 00:13:16.746 "core_count": 1 00:13:16.746 } 00:13:16.746 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.746 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.746 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:16.746 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.746 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:17.006 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.006 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:17.006 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' 
true = true ']' 00:13:17.006 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:17.006 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:17.006 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.006 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:17.006 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:17.006 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:17.006 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:17.006 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:17.006 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:17.006 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.006 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:17.006 /dev/nbd0 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 
/proc/partitions 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.266 1+0 records in 00:13:17.266 1+0 records out 00:13:17.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347037 s, 11.8 MB/s 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev2') 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:17.266 /dev/nbd1 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:17.266 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:17.527 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:13:17.527 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:17.527 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:17.527 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:17.527 1+0 records in 00:13:17.527 1+0 records out 00:13:17.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035722 s, 11.5 MB/s 00:13:17.527 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.527 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:13:17.527 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:17.527 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:17.527 05:51:24 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:13:17.527 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:17.527 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:17.527 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:17.527 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:17.527 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.527 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:17.527 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:17.527 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:17.528 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.528 05:51:24 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:17.789 05:51:25 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:17.789 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:17.789 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:17.789 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:17.789 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:17.789 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:17.789 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:17.789 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:17.789 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:17.789 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:17.789 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:17.789 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:17.789 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:17.789 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:17.789 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:18.048 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:18.048 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:18.048 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:18.048 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:18.048 05:51:25 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:18.048 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:18.048 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:18.048 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:18.048 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:18.048 05:51:25 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 77347 00:13:18.048 05:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 77347 ']' 00:13:18.048 05:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 77347 00:13:18.048 05:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:13:18.048 05:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:18.048 05:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77347 00:13:18.048 killing process with pid 77347 00:13:18.048 Received shutdown signal, test time was about 8.721840 seconds 00:13:18.048 00:13:18.048 Latency(us) 00:13:18.048 [2024-12-12T05:51:25.570Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:18.048 [2024-12-12T05:51:25.570Z] =================================================================================================================== 00:13:18.048 [2024-12-12T05:51:25.570Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:18.048 05:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:18.048 05:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:18.048 05:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77347' 
00:13:18.048 05:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 77347 00:13:18.048 [2024-12-12 05:51:25.457927] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:18.048 05:51:25 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 77347 00:13:18.308 [2024-12-12 05:51:25.680019] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:19.687 05:51:26 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:19.687 00:13:19.687 real 0m11.821s 00:13:19.687 user 0m14.913s 00:13:19.687 sys 0m1.386s 00:13:19.687 ************************************ 00:13:19.687 END TEST raid_rebuild_test_io 00:13:19.687 ************************************ 00:13:19.687 05:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.687 05:51:26 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.687 05:51:26 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:19.687 05:51:26 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:19.687 05:51:26 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:19.687 05:51:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:19.687 ************************************ 00:13:19.687 START TEST raid_rebuild_test_sb_io 00:13:19.687 ************************************ 00:13:19.687 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:13:19.687 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:19.687 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:19.687 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:19.687 05:51:26 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:19.687 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:19.687 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:19.687 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:19.687 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:19.687 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:19.687 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:19.687 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:19.688 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:19.688 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:19.688 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:19.688 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:19.688 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:19.688 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:19.688 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:19.688 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:19.688 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:19.688 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:19.688 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:19.688 05:51:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:19.688 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:19.688 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77713 00:13:19.688 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:19.688 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77713 00:13:19.688 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 77713 ']' 00:13:19.688 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.688 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:19.688 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.688 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:19.688 05:51:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:19.688 [2024-12-12 05:51:26.993776] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:13:19.688 [2024-12-12 05:51:26.993980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:19.688 Zero copy mechanism will not be used. 
00:13:19.688 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77713 ] 00:13:19.688 [2024-12-12 05:51:27.165106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.947 [2024-12-12 05:51:27.274699] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.947 [2024-12-12 05:51:27.465415] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.947 [2024-12-12 05:51:27.465567] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.517 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:20.517 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:13:20.517 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:20.517 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:20.517 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.517 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.517 BaseBdev1_malloc 00:13:20.517 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.517 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:20.517 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.517 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.517 [2024-12-12 05:51:27.844263] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:20.517 [2024-12-12 05:51:27.844321] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.517 [2024-12-12 05:51:27.844359] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:20.517 [2024-12-12 05:51:27.844370] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.517 [2024-12-12 05:51:27.846403] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.517 [2024-12-12 05:51:27.846496] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:20.517 BaseBdev1 00:13:20.517 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.517 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:20.517 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:20.517 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.517 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.517 BaseBdev2_malloc 00:13:20.517 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.517 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:20.517 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.517 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.517 [2024-12-12 05:51:27.896556] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:20.517 [2024-12-12 05:51:27.896611] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.518 [2024-12-12 05:51:27.896629] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:13:20.518 [2024-12-12 05:51:27.896640] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.518 [2024-12-12 05:51:27.898662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.518 [2024-12-12 05:51:27.898704] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:20.518 BaseBdev2 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.518 spare_malloc 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.518 spare_delay 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.518 [2024-12-12 05:51:27.975302] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:20.518 
[2024-12-12 05:51:27.975413] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.518 [2024-12-12 05:51:27.975438] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:20.518 [2024-12-12 05:51:27.975450] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.518 [2024-12-12 05:51:27.977636] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.518 [2024-12-12 05:51:27.977674] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:20.518 spare 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.518 [2024-12-12 05:51:27.987336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:20.518 [2024-12-12 05:51:27.989047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:20.518 [2024-12-12 05:51:27.989210] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:20.518 [2024-12-12 05:51:27.989226] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:20.518 [2024-12-12 05:51:27.989460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:20.518 [2024-12-12 05:51:27.989640] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:20.518 [2024-12-12 05:51:27.989650] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:13:20.518 [2024-12-12 05:51:27.989807] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.518 05:51:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:20.518 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.778 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.778 "name": "raid_bdev1", 00:13:20.778 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:20.778 "strip_size_kb": 0, 00:13:20.778 "state": "online", 00:13:20.778 "raid_level": "raid1", 00:13:20.778 "superblock": true, 00:13:20.778 "num_base_bdevs": 2, 00:13:20.778 "num_base_bdevs_discovered": 2, 00:13:20.778 "num_base_bdevs_operational": 2, 00:13:20.778 "base_bdevs_list": [ 00:13:20.778 { 00:13:20.778 "name": "BaseBdev1", 00:13:20.778 "uuid": "e76867fa-ad94-54a3-9365-81a67d21709e", 00:13:20.778 "is_configured": true, 00:13:20.778 "data_offset": 2048, 00:13:20.778 "data_size": 63488 00:13:20.778 }, 00:13:20.778 { 00:13:20.778 "name": "BaseBdev2", 00:13:20.778 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:20.778 "is_configured": true, 00:13:20.778 "data_offset": 2048, 00:13:20.778 "data_size": 63488 00:13:20.778 } 00:13:20.778 ] 00:13:20.778 }' 00:13:20.778 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.778 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.038 [2024-12-12 05:51:28.450879] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.038 [2024-12-12 05:51:28.530476] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.038 
05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.038 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.298 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.298 "name": "raid_bdev1", 00:13:21.298 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:21.298 "strip_size_kb": 0, 00:13:21.298 "state": "online", 00:13:21.298 "raid_level": "raid1", 00:13:21.298 "superblock": true, 00:13:21.298 "num_base_bdevs": 2, 00:13:21.298 "num_base_bdevs_discovered": 1, 00:13:21.298 "num_base_bdevs_operational": 1, 00:13:21.298 "base_bdevs_list": [ 00:13:21.298 { 00:13:21.298 "name": null, 00:13:21.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.298 "is_configured": false, 00:13:21.298 "data_offset": 0, 00:13:21.298 "data_size": 63488 00:13:21.298 }, 00:13:21.298 { 00:13:21.298 "name": "BaseBdev2", 00:13:21.298 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:21.298 "is_configured": true, 00:13:21.298 "data_offset": 2048, 
00:13:21.298 "data_size": 63488 00:13:21.298 } 00:13:21.298 ] 00:13:21.298 }' 00:13:21.298 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.298 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.298 [2024-12-12 05:51:28.606405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:21.298 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:21.298 Zero copy mechanism will not be used. 00:13:21.298 Running I/O for 60 seconds... 00:13:21.558 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:21.558 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.558 05:51:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:21.558 [2024-12-12 05:51:28.962912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:21.558 05:51:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.558 05:51:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:21.558 [2024-12-12 05:51:29.016109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:21.558 [2024-12-12 05:51:29.018039] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:21.818 [2024-12-12 05:51:29.125775] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:21.818 [2024-12-12 05:51:29.126226] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:21.818 [2024-12-12 05:51:29.240772] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:21.818 
[2024-12-12 05:51:29.240965] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:22.077 [2024-12-12 05:51:29.566582] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:22.336 196.00 IOPS, 588.00 MiB/s [2024-12-12T05:51:29.858Z] [2024-12-12 05:51:29.694932] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:22.596 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:22.596 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:22.596 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:22.596 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:22.596 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:22.596 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.596 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.596 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.596 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.596 [2024-12-12 05:51:30.028486] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:22.596 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.596 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:22.596 "name": "raid_bdev1", 00:13:22.596 "uuid": 
"f179423f-a468-4b70-8387-c54360fc3f03", 00:13:22.596 "strip_size_kb": 0, 00:13:22.596 "state": "online", 00:13:22.596 "raid_level": "raid1", 00:13:22.596 "superblock": true, 00:13:22.596 "num_base_bdevs": 2, 00:13:22.596 "num_base_bdevs_discovered": 2, 00:13:22.596 "num_base_bdevs_operational": 2, 00:13:22.596 "process": { 00:13:22.596 "type": "rebuild", 00:13:22.596 "target": "spare", 00:13:22.596 "progress": { 00:13:22.596 "blocks": 12288, 00:13:22.596 "percent": 19 00:13:22.596 } 00:13:22.596 }, 00:13:22.596 "base_bdevs_list": [ 00:13:22.596 { 00:13:22.596 "name": "spare", 00:13:22.596 "uuid": "629ea382-23f7-5105-a037-b7fefdcad59d", 00:13:22.596 "is_configured": true, 00:13:22.596 "data_offset": 2048, 00:13:22.596 "data_size": 63488 00:13:22.596 }, 00:13:22.596 { 00:13:22.596 "name": "BaseBdev2", 00:13:22.596 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:22.596 "is_configured": true, 00:13:22.596 "data_offset": 2048, 00:13:22.596 "data_size": 63488 00:13:22.596 } 00:13:22.596 ] 00:13:22.596 }' 00:13:22.596 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:22.596 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:22.596 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:22.855 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:22.855 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:22.855 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.855 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:22.855 [2024-12-12 05:51:30.165605] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:22.855 [2024-12-12 05:51:30.350923] 
bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:22.855 [2024-12-12 05:51:30.357972] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.855 [2024-12-12 05:51:30.358012] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:22.855 [2024-12-12 05:51:30.358026] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:23.114 [2024-12-12 05:51:30.407017] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:23.114 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.114 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:23.114 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:23.114 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.114 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.114 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.114 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:23.114 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.114 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.114 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.114 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.114 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.114 05:51:30 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.114 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.114 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.114 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.114 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.114 "name": "raid_bdev1", 00:13:23.114 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:23.114 "strip_size_kb": 0, 00:13:23.114 "state": "online", 00:13:23.114 "raid_level": "raid1", 00:13:23.114 "superblock": true, 00:13:23.114 "num_base_bdevs": 2, 00:13:23.114 "num_base_bdevs_discovered": 1, 00:13:23.114 "num_base_bdevs_operational": 1, 00:13:23.114 "base_bdevs_list": [ 00:13:23.114 { 00:13:23.114 "name": null, 00:13:23.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.114 "is_configured": false, 00:13:23.114 "data_offset": 0, 00:13:23.114 "data_size": 63488 00:13:23.114 }, 00:13:23.114 { 00:13:23.114 "name": "BaseBdev2", 00:13:23.114 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:23.114 "is_configured": true, 00:13:23.114 "data_offset": 2048, 00:13:23.114 "data_size": 63488 00:13:23.114 } 00:13:23.114 ] 00:13:23.114 }' 00:13:23.114 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.114 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.372 162.50 IOPS, 487.50 MiB/s [2024-12-12T05:51:30.894Z] 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:23.372 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:23.372 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:23.372 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:23.372 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:23.372 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.372 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.372 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.372 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.372 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.372 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:23.372 "name": "raid_bdev1", 00:13:23.372 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:23.372 "strip_size_kb": 0, 00:13:23.372 "state": "online", 00:13:23.372 "raid_level": "raid1", 00:13:23.372 "superblock": true, 00:13:23.372 "num_base_bdevs": 2, 00:13:23.372 "num_base_bdevs_discovered": 1, 00:13:23.372 "num_base_bdevs_operational": 1, 00:13:23.372 "base_bdevs_list": [ 00:13:23.372 { 00:13:23.372 "name": null, 00:13:23.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.372 "is_configured": false, 00:13:23.372 "data_offset": 0, 00:13:23.372 "data_size": 63488 00:13:23.372 }, 00:13:23.372 { 00:13:23.372 "name": "BaseBdev2", 00:13:23.372 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:23.372 "is_configured": true, 00:13:23.372 "data_offset": 2048, 00:13:23.372 "data_size": 63488 00:13:23.372 } 00:13:23.372 ] 00:13:23.372 }' 00:13:23.372 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:23.631 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:23.631 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:23.631 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:23.631 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:23.631 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.631 05:51:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:23.631 [2024-12-12 05:51:30.977308] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:23.631 05:51:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.631 05:51:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:23.631 [2024-12-12 05:51:31.034903] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:23.631 [2024-12-12 05:51:31.036744] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:23.891 [2024-12-12 05:51:31.153704] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:23.891 [2024-12-12 05:51:31.154140] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:23.891 [2024-12-12 05:51:31.372284] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:23.891 [2024-12-12 05:51:31.372496] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:24.409 166.33 IOPS, 499.00 MiB/s [2024-12-12T05:51:31.931Z] [2024-12-12 05:51:31.716837] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 
offset_end: 12288 00:13:24.409 [2024-12-12 05:51:31.924301] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:24.409 [2024-12-12 05:51:31.924646] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:24.667 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.667 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.667 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.667 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.667 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.667 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.667 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.667 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.668 "name": "raid_bdev1", 00:13:24.668 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:24.668 "strip_size_kb": 0, 00:13:24.668 "state": "online", 00:13:24.668 "raid_level": "raid1", 00:13:24.668 "superblock": true, 00:13:24.668 "num_base_bdevs": 2, 00:13:24.668 "num_base_bdevs_discovered": 2, 00:13:24.668 "num_base_bdevs_operational": 2, 00:13:24.668 "process": { 00:13:24.668 "type": 
"rebuild", 00:13:24.668 "target": "spare", 00:13:24.668 "progress": { 00:13:24.668 "blocks": 10240, 00:13:24.668 "percent": 16 00:13:24.668 } 00:13:24.668 }, 00:13:24.668 "base_bdevs_list": [ 00:13:24.668 { 00:13:24.668 "name": "spare", 00:13:24.668 "uuid": "629ea382-23f7-5105-a037-b7fefdcad59d", 00:13:24.668 "is_configured": true, 00:13:24.668 "data_offset": 2048, 00:13:24.668 "data_size": 63488 00:13:24.668 }, 00:13:24.668 { 00:13:24.668 "name": "BaseBdev2", 00:13:24.668 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:24.668 "is_configured": true, 00:13:24.668 "data_offset": 2048, 00:13:24.668 "data_size": 63488 00:13:24.668 } 00:13:24.668 ] 00:13:24.668 }' 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:24.668 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=406 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout 
)) 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.668 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:24.927 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.927 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:24.927 "name": "raid_bdev1", 00:13:24.927 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:24.927 "strip_size_kb": 0, 00:13:24.927 "state": "online", 00:13:24.928 "raid_level": "raid1", 00:13:24.928 "superblock": true, 00:13:24.928 "num_base_bdevs": 2, 00:13:24.928 "num_base_bdevs_discovered": 2, 00:13:24.928 "num_base_bdevs_operational": 2, 00:13:24.928 "process": { 00:13:24.928 "type": "rebuild", 00:13:24.928 "target": "spare", 00:13:24.928 "progress": { 00:13:24.928 "blocks": 12288, 00:13:24.928 "percent": 19 00:13:24.928 } 00:13:24.928 }, 00:13:24.928 "base_bdevs_list": [ 00:13:24.928 { 00:13:24.928 "name": "spare", 00:13:24.928 "uuid": "629ea382-23f7-5105-a037-b7fefdcad59d", 00:13:24.928 "is_configured": true, 00:13:24.928 
"data_offset": 2048, 00:13:24.928 "data_size": 63488 00:13:24.928 }, 00:13:24.928 { 00:13:24.928 "name": "BaseBdev2", 00:13:24.928 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:24.928 "is_configured": true, 00:13:24.928 "data_offset": 2048, 00:13:24.928 "data_size": 63488 00:13:24.928 } 00:13:24.928 ] 00:13:24.928 }' 00:13:24.928 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:24.928 [2024-12-12 05:51:32.249160] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:24.928 [2024-12-12 05:51:32.249812] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:24.928 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:24.928 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:24.928 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:24.928 05:51:32 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:25.187 [2024-12-12 05:51:32.451294] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:25.187 [2024-12-12 05:51:32.451709] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:25.187 146.50 IOPS, 439.50 MiB/s [2024-12-12T05:51:32.709Z] [2024-12-12 05:51:32.684680] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:25.187 [2024-12-12 05:51:32.685238] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:25.446 [2024-12-12 05:51:32.893767] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:25.446 [2024-12-12 05:51:32.894136] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:26.015 05:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:26.015 05:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:26.015 05:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:26.015 05:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:26.015 05:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:26.015 05:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:26.015 05:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.015 05:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.015 05:51:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.015 05:51:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:26.015 05:51:33 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.015 [2024-12-12 05:51:33.342852] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:13:26.015 05:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:26.015 "name": "raid_bdev1", 00:13:26.015 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:26.015 "strip_size_kb": 0, 00:13:26.015 "state": "online", 00:13:26.015 "raid_level": "raid1", 
00:13:26.015 "superblock": true, 00:13:26.015 "num_base_bdevs": 2, 00:13:26.015 "num_base_bdevs_discovered": 2, 00:13:26.015 "num_base_bdevs_operational": 2, 00:13:26.015 "process": { 00:13:26.015 "type": "rebuild", 00:13:26.015 "target": "spare", 00:13:26.015 "progress": { 00:13:26.015 "blocks": 26624, 00:13:26.015 "percent": 41 00:13:26.015 } 00:13:26.015 }, 00:13:26.015 "base_bdevs_list": [ 00:13:26.015 { 00:13:26.015 "name": "spare", 00:13:26.015 "uuid": "629ea382-23f7-5105-a037-b7fefdcad59d", 00:13:26.015 "is_configured": true, 00:13:26.015 "data_offset": 2048, 00:13:26.015 "data_size": 63488 00:13:26.015 }, 00:13:26.015 { 00:13:26.015 "name": "BaseBdev2", 00:13:26.015 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:26.015 "is_configured": true, 00:13:26.015 "data_offset": 2048, 00:13:26.015 "data_size": 63488 00:13:26.015 } 00:13:26.015 ] 00:13:26.015 }' 00:13:26.015 05:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:26.015 05:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:26.015 05:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:26.015 05:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:26.015 05:51:33 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:26.275 [2024-12-12 05:51:33.584738] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:26.879 128.00 IOPS, 384.00 MiB/s [2024-12-12T05:51:34.401Z] [2024-12-12 05:51:34.119998] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:13:27.139 05:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:27.139 05:51:34 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:27.139 05:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:27.139 05:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:27.139 05:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:27.139 05:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:27.139 [2024-12-12 05:51:34.439509] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:13:27.139 05:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.139 05:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.139 05:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.139 05:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:27.139 05:51:34 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.139 05:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:27.139 "name": "raid_bdev1", 00:13:27.139 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:27.139 "strip_size_kb": 0, 00:13:27.139 "state": "online", 00:13:27.139 "raid_level": "raid1", 00:13:27.139 "superblock": true, 00:13:27.139 "num_base_bdevs": 2, 00:13:27.139 "num_base_bdevs_discovered": 2, 00:13:27.139 "num_base_bdevs_operational": 2, 00:13:27.139 "process": { 00:13:27.139 "type": "rebuild", 00:13:27.139 "target": "spare", 00:13:27.139 "progress": { 00:13:27.139 "blocks": 47104, 00:13:27.139 "percent": 74 00:13:27.139 } 00:13:27.139 }, 00:13:27.139 "base_bdevs_list": [ 00:13:27.139 { 00:13:27.139 "name": "spare", 
00:13:27.139 "uuid": "629ea382-23f7-5105-a037-b7fefdcad59d", 00:13:27.139 "is_configured": true, 00:13:27.139 "data_offset": 2048, 00:13:27.139 "data_size": 63488 00:13:27.139 }, 00:13:27.139 { 00:13:27.139 "name": "BaseBdev2", 00:13:27.139 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:27.139 "is_configured": true, 00:13:27.139 "data_offset": 2048, 00:13:27.139 "data_size": 63488 00:13:27.139 } 00:13:27.139 ] 00:13:27.139 }' 00:13:27.139 05:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:27.139 05:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:27.139 05:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:27.139 05:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:27.139 05:51:34 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:27.399 114.67 IOPS, 344.00 MiB/s [2024-12-12T05:51:34.921Z] [2024-12-12 05:51:34.660792] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:13:27.659 [2024-12-12 05:51:35.091607] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:28.227 [2024-12-12 05:51:35.519546] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:28.227 05:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:28.227 05:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.227 05:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.227 05:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.227 
05:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.227 05:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.227 05:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.227 05:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.227 05:51:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.227 05:51:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:28.227 05:51:35 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.227 103.29 IOPS, 309.86 MiB/s [2024-12-12T05:51:35.749Z] 05:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.227 "name": "raid_bdev1", 00:13:28.227 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:28.227 "strip_size_kb": 0, 00:13:28.227 "state": "online", 00:13:28.227 "raid_level": "raid1", 00:13:28.227 "superblock": true, 00:13:28.227 "num_base_bdevs": 2, 00:13:28.227 "num_base_bdevs_discovered": 2, 00:13:28.227 "num_base_bdevs_operational": 2, 00:13:28.227 "process": { 00:13:28.227 "type": "rebuild", 00:13:28.227 "target": "spare", 00:13:28.227 "progress": { 00:13:28.227 "blocks": 63488, 00:13:28.227 "percent": 100 00:13:28.227 } 00:13:28.227 }, 00:13:28.227 "base_bdevs_list": [ 00:13:28.227 { 00:13:28.227 "name": "spare", 00:13:28.227 "uuid": "629ea382-23f7-5105-a037-b7fefdcad59d", 00:13:28.227 "is_configured": true, 00:13:28.227 "data_offset": 2048, 00:13:28.227 "data_size": 63488 00:13:28.227 }, 00:13:28.227 { 00:13:28.227 "name": "BaseBdev2", 00:13:28.227 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:28.227 "is_configured": true, 00:13:28.227 "data_offset": 2048, 00:13:28.227 "data_size": 63488 00:13:28.227 } 00:13:28.227 ] 00:13:28.227 }' 
00:13:28.227 [2024-12-12 05:51:35.619345] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:28.227 05:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.227 [2024-12-12 05:51:35.627046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:28.227 05:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:28.227 05:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.227 05:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.227 05:51:35 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:29.424 96.00 IOPS, 288.00 MiB/s [2024-12-12T05:51:36.946Z] 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 
-- # set +x 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.424 "name": "raid_bdev1", 00:13:29.424 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:29.424 "strip_size_kb": 0, 00:13:29.424 "state": "online", 00:13:29.424 "raid_level": "raid1", 00:13:29.424 "superblock": true, 00:13:29.424 "num_base_bdevs": 2, 00:13:29.424 "num_base_bdevs_discovered": 2, 00:13:29.424 "num_base_bdevs_operational": 2, 00:13:29.424 "base_bdevs_list": [ 00:13:29.424 { 00:13:29.424 "name": "spare", 00:13:29.424 "uuid": "629ea382-23f7-5105-a037-b7fefdcad59d", 00:13:29.424 "is_configured": true, 00:13:29.424 "data_offset": 2048, 00:13:29.424 "data_size": 63488 00:13:29.424 }, 00:13:29.424 { 00:13:29.424 "name": "BaseBdev2", 00:13:29.424 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:29.424 "is_configured": true, 00:13:29.424 "data_offset": 2048, 00:13:29.424 "data_size": 63488 00:13:29.424 } 00:13:29.424 ] 00:13:29.424 }' 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.424 "name": "raid_bdev1", 00:13:29.424 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:29.424 "strip_size_kb": 0, 00:13:29.424 "state": "online", 00:13:29.424 "raid_level": "raid1", 00:13:29.424 "superblock": true, 00:13:29.424 "num_base_bdevs": 2, 00:13:29.424 "num_base_bdevs_discovered": 2, 00:13:29.424 "num_base_bdevs_operational": 2, 00:13:29.424 "base_bdevs_list": [ 00:13:29.424 { 00:13:29.424 "name": "spare", 00:13:29.424 "uuid": "629ea382-23f7-5105-a037-b7fefdcad59d", 00:13:29.424 "is_configured": true, 00:13:29.424 "data_offset": 2048, 00:13:29.424 "data_size": 63488 00:13:29.424 }, 00:13:29.424 { 00:13:29.424 "name": "BaseBdev2", 00:13:29.424 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:29.424 "is_configured": true, 00:13:29.424 "data_offset": 2048, 00:13:29.424 "data_size": 63488 00:13:29.424 } 00:13:29.424 ] 00:13:29.424 }' 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:29.424 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.685 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:29.685 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:29.685 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.685 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.685 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.685 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.685 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:29.685 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.685 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.685 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.685 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.685 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.685 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.685 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.685 05:51:36 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.685 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.685 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.685 "name": "raid_bdev1", 00:13:29.685 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:29.685 "strip_size_kb": 0, 00:13:29.685 "state": "online", 00:13:29.685 "raid_level": "raid1", 00:13:29.685 "superblock": true, 00:13:29.685 "num_base_bdevs": 2, 00:13:29.685 "num_base_bdevs_discovered": 2, 00:13:29.685 "num_base_bdevs_operational": 2, 00:13:29.685 "base_bdevs_list": [ 00:13:29.685 { 00:13:29.685 "name": "spare", 00:13:29.685 "uuid": "629ea382-23f7-5105-a037-b7fefdcad59d", 00:13:29.685 "is_configured": true, 00:13:29.685 "data_offset": 2048, 00:13:29.685 "data_size": 63488 00:13:29.685 }, 00:13:29.685 { 00:13:29.685 "name": "BaseBdev2", 00:13:29.685 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:29.685 "is_configured": true, 00:13:29.685 "data_offset": 2048, 00:13:29.685 "data_size": 63488 00:13:29.685 } 00:13:29.685 ] 00:13:29.685 }' 00:13:29.685 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.685 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.945 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:29.945 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.945 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:29.945 [2024-12-12 05:51:37.382286] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:29.945 [2024-12-12 05:51:37.382314] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:29.945 00:13:29.945 Latency(us) 00:13:29.945 [2024-12-12T05:51:37.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.945 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:29.945 raid_bdev1 : 8.86 91.06 
273.17 0.00 0.00 14916.97 304.07 112641.79 00:13:29.945 [2024-12-12T05:51:37.467Z] =================================================================================================================== 00:13:29.945 [2024-12-12T05:51:37.467Z] Total : 91.06 273.17 0.00 0.00 14916.97 304.07 112641.79 00:13:30.205 [2024-12-12 05:51:37.474016] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.205 [2024-12-12 05:51:37.474065] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.205 [2024-12-12 05:51:37.474133] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.205 [2024-12-12 05:51:37.474142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:30.205 { 00:13:30.205 "results": [ 00:13:30.205 { 00:13:30.205 "job": "raid_bdev1", 00:13:30.205 "core_mask": "0x1", 00:13:30.205 "workload": "randrw", 00:13:30.205 "percentage": 50, 00:13:30.205 "status": "finished", 00:13:30.205 "queue_depth": 2, 00:13:30.205 "io_size": 3145728, 00:13:30.205 "runtime": 8.86273, 00:13:30.205 "iops": 91.05546485112374, 00:13:30.205 "mibps": 273.1663945533712, 00:13:30.205 "io_failed": 0, 00:13:30.205 "io_timeout": 0, 00:13:30.205 "avg_latency_us": 14916.966499461587, 00:13:30.205 "min_latency_us": 304.0698689956332, 00:13:30.205 "max_latency_us": 112641.78864628822 00:13:30.205 } 00:13:30.205 ], 00:13:30.205 "core_count": 1 00:13:30.205 } 00:13:30.205 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.205 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.205 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:30.205 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.205 05:51:37 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@10 -- # set +x 00:13:30.205 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.205 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:30.205 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:30.205 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:30.205 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:30.205 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.205 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:13:30.205 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:30.205 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:30.205 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:30.205 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:30.205 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:30.205 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.205 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:30.205 /dev/nbd0 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:30.465 05:51:37 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.465 1+0 records in 00:13:30.465 1+0 records out 00:13:30.465 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00036084 s, 11.4 MB/s 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 
00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:30.465 /dev/nbd1 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:30.465 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:30.725 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:13:30.725 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:30.725 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:30.725 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.725 1+0 records in 00:13:30.725 1+0 records out 00:13:30.725 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519281 s, 7.9 MB/s 00:13:30.725 05:51:37 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.725 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:13:30.725 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.725 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:30.725 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:13:30.725 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:30.725 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:30.725 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:30.725 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:30.725 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.725 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 
00:13:30.725 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:30.725 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:30.725 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.725 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:30.984 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:30.984 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:30.985 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:30.985 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.985 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.985 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:30.985 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:30.985 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.985 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:30.985 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:30.985 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:30.985 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:30.985 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:13:30.985 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.985 05:51:38 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:31.244 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:31.244 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:31.244 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:31.244 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.244 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.244 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.245 [2024-12-12 05:51:38.641007] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:31.245 
[2024-12-12 05:51:38.641120] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:31.245 [2024-12-12 05:51:38.641160] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:13:31.245 [2024-12-12 05:51:38.641192] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:31.245 [2024-12-12 05:51:38.643348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:31.245 [2024-12-12 05:51:38.643422] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:31.245 [2024-12-12 05:51:38.643568] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:31.245 [2024-12-12 05:51:38.643657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:31.245 [2024-12-12 05:51:38.643856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:31.245 spare 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.245 [2024-12-12 05:51:38.743805] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:31.245 [2024-12-12 05:51:38.743843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:31.245 [2024-12-12 05:51:38.744107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:13:31.245 [2024-12-12 05:51:38.744264] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:31.245 [2024-12-12 05:51:38.744282] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:31.245 [2024-12-12 05:51:38.744441] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.245 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.505 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.505 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:31.505 "name": "raid_bdev1", 00:13:31.505 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:31.505 "strip_size_kb": 0, 00:13:31.505 "state": "online", 00:13:31.505 "raid_level": "raid1", 00:13:31.505 "superblock": true, 00:13:31.505 "num_base_bdevs": 2, 00:13:31.505 "num_base_bdevs_discovered": 2, 00:13:31.505 "num_base_bdevs_operational": 2, 00:13:31.505 "base_bdevs_list": [ 00:13:31.505 { 00:13:31.505 "name": "spare", 00:13:31.505 "uuid": "629ea382-23f7-5105-a037-b7fefdcad59d", 00:13:31.505 "is_configured": true, 00:13:31.505 "data_offset": 2048, 00:13:31.505 "data_size": 63488 00:13:31.505 }, 00:13:31.505 { 00:13:31.505 "name": "BaseBdev2", 00:13:31.505 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:31.505 "is_configured": true, 00:13:31.505 "data_offset": 2048, 00:13:31.505 "data_size": 63488 00:13:31.505 } 00:13:31.505 ] 00:13:31.505 }' 00:13:31.505 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:31.505 05:51:38 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.765 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:31.765 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.765 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:31.765 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:31.765 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.765 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.765 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.765 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.765 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.765 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.765 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.765 "name": "raid_bdev1", 00:13:31.765 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:31.765 "strip_size_kb": 0, 00:13:31.765 "state": "online", 00:13:31.765 "raid_level": "raid1", 00:13:31.765 "superblock": true, 00:13:31.765 "num_base_bdevs": 2, 00:13:31.765 "num_base_bdevs_discovered": 2, 00:13:31.765 "num_base_bdevs_operational": 2, 00:13:31.765 "base_bdevs_list": [ 00:13:31.765 { 00:13:31.765 "name": "spare", 00:13:31.765 "uuid": "629ea382-23f7-5105-a037-b7fefdcad59d", 00:13:31.765 "is_configured": true, 00:13:31.765 "data_offset": 2048, 00:13:31.765 "data_size": 63488 00:13:31.765 }, 00:13:31.765 { 00:13:31.765 "name": "BaseBdev2", 00:13:31.765 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:31.765 "is_configured": true, 00:13:31.765 "data_offset": 2048, 00:13:31.765 "data_size": 63488 00:13:31.765 } 00:13:31.765 ] 00:13:31.765 }' 00:13:31.765 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:31.765 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:31.765 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:31.765 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:31.765 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:31.765 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.765 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.765 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:31.765 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.025 [2024-12-12 05:51:39.291956] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:32.025 05:51:39 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:32.025 "name": "raid_bdev1", 00:13:32.025 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:32.025 "strip_size_kb": 0, 00:13:32.025 "state": "online", 00:13:32.025 "raid_level": "raid1", 00:13:32.025 "superblock": true, 00:13:32.025 "num_base_bdevs": 2, 00:13:32.025 "num_base_bdevs_discovered": 1, 00:13:32.025 "num_base_bdevs_operational": 1, 00:13:32.025 "base_bdevs_list": [ 00:13:32.025 { 00:13:32.025 "name": null, 00:13:32.025 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:32.025 "is_configured": false, 00:13:32.025 "data_offset": 0, 00:13:32.025 "data_size": 63488 00:13:32.025 }, 00:13:32.025 { 00:13:32.025 "name": "BaseBdev2", 00:13:32.025 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:32.025 "is_configured": true, 00:13:32.025 "data_offset": 2048, 00:13:32.025 "data_size": 63488 00:13:32.025 } 00:13:32.025 ] 00:13:32.025 }' 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:32.025 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.285 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:32.285 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.285 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:32.285 [2024-12-12 05:51:39.711340] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:32.285 [2024-12-12 05:51:39.711595] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:32.285 [2024-12-12 05:51:39.711619] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:32.285 [2024-12-12 05:51:39.711656] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:32.285 [2024-12-12 05:51:39.727482] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:13:32.285 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.285 05:51:39 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:32.285 [2024-12-12 05:51:39.729310] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:33.224 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:33.224 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.224 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:33.224 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:33.224 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.224 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:33.225 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.225 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.225 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.484 "name": "raid_bdev1", 00:13:33.484 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:33.484 "strip_size_kb": 0, 00:13:33.484 "state": "online", 00:13:33.484 "raid_level": "raid1", 00:13:33.484 "superblock": true, 00:13:33.484 "num_base_bdevs": 2, 00:13:33.484 "num_base_bdevs_discovered": 2, 00:13:33.484 "num_base_bdevs_operational": 2, 00:13:33.484 "process": { 00:13:33.484 "type": "rebuild", 00:13:33.484 "target": "spare", 00:13:33.484 "progress": { 00:13:33.484 "blocks": 20480, 00:13:33.484 "percent": 32 00:13:33.484 } 00:13:33.484 }, 00:13:33.484 "base_bdevs_list": [ 00:13:33.484 { 00:13:33.484 "name": "spare", 00:13:33.484 "uuid": "629ea382-23f7-5105-a037-b7fefdcad59d", 00:13:33.484 "is_configured": true, 00:13:33.484 "data_offset": 2048, 00:13:33.484 "data_size": 63488 00:13:33.484 }, 00:13:33.484 { 00:13:33.484 "name": "BaseBdev2", 00:13:33.484 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:33.484 "is_configured": true, 00:13:33.484 "data_offset": 2048, 00:13:33.484 "data_size": 63488 00:13:33.484 } 00:13:33.484 ] 00:13:33.484 }' 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // 
"none"' 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.484 [2024-12-12 05:51:40.885174] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:33.484 [2024-12-12 05:51:40.934050] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:33.484 [2024-12-12 05:51:40.934172] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.484 [2024-12-12 05:51:40.934206] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:33.484 [2024-12-12 05:51:40.934229] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:33.484 05:51:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.743 05:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.743 "name": "raid_bdev1", 00:13:33.743 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:33.743 "strip_size_kb": 0, 00:13:33.743 "state": "online", 00:13:33.743 "raid_level": "raid1", 00:13:33.743 "superblock": true, 00:13:33.743 "num_base_bdevs": 2, 00:13:33.743 "num_base_bdevs_discovered": 1, 00:13:33.743 "num_base_bdevs_operational": 1, 00:13:33.743 "base_bdevs_list": [ 00:13:33.743 { 00:13:33.743 "name": null, 00:13:33.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.743 "is_configured": false, 00:13:33.743 "data_offset": 0, 00:13:33.743 "data_size": 63488 00:13:33.743 }, 00:13:33.743 { 00:13:33.743 "name": "BaseBdev2", 00:13:33.743 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:33.743 "is_configured": true, 00:13:33.743 "data_offset": 2048, 00:13:33.743 "data_size": 63488 00:13:33.743 } 00:13:33.743 ] 00:13:33.743 }' 00:13:33.743 05:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.743 05:51:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.002 05:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:34.002 05:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.002 05:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.002 [2024-12-12 05:51:41.385913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:34.002 [2024-12-12 05:51:41.386025] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:34.002 [2024-12-12 05:51:41.386077] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:34.002 [2024-12-12 05:51:41.386114] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:34.002 [2024-12-12 05:51:41.386628] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:34.002 [2024-12-12 05:51:41.386697] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:34.002 [2024-12-12 05:51:41.386821] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:34.002 [2024-12-12 05:51:41.386871] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:34.002 [2024-12-12 05:51:41.386925] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:34.002 [2024-12-12 05:51:41.386977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:34.002 [2024-12-12 05:51:41.402108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:13:34.002 spare 00:13:34.002 05:51:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.002 [2024-12-12 05:51:41.403975] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:34.002 05:51:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:34.939 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:34.939 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:34.940 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:34.940 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:34.940 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:34.940 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.940 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.940 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:34.940 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:34.940 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.199 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.199 "name": "raid_bdev1", 00:13:35.199 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:35.199 "strip_size_kb": 0, 00:13:35.199 
"state": "online", 00:13:35.199 "raid_level": "raid1", 00:13:35.199 "superblock": true, 00:13:35.199 "num_base_bdevs": 2, 00:13:35.199 "num_base_bdevs_discovered": 2, 00:13:35.199 "num_base_bdevs_operational": 2, 00:13:35.199 "process": { 00:13:35.199 "type": "rebuild", 00:13:35.199 "target": "spare", 00:13:35.199 "progress": { 00:13:35.199 "blocks": 20480, 00:13:35.199 "percent": 32 00:13:35.199 } 00:13:35.199 }, 00:13:35.199 "base_bdevs_list": [ 00:13:35.199 { 00:13:35.199 "name": "spare", 00:13:35.199 "uuid": "629ea382-23f7-5105-a037-b7fefdcad59d", 00:13:35.199 "is_configured": true, 00:13:35.199 "data_offset": 2048, 00:13:35.199 "data_size": 63488 00:13:35.199 }, 00:13:35.199 { 00:13:35.199 "name": "BaseBdev2", 00:13:35.199 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:35.199 "is_configured": true, 00:13:35.199 "data_offset": 2048, 00:13:35.199 "data_size": 63488 00:13:35.199 } 00:13:35.199 ] 00:13:35.199 }' 00:13:35.199 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.199 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:35.199 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.199 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.199 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:35.199 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.199 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.199 [2024-12-12 05:51:42.567954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.199 [2024-12-12 05:51:42.608807] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:13:35.199 [2024-12-12 05:51:42.608859] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.199 [2024-12-12 05:51:42.608874] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.199 [2024-12-12 05:51:42.608881] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:35.199 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.199 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:35.199 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.199 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.199 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.199 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.199 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:35.200 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.200 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.200 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.200 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.200 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.200 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.200 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.200 05:51:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.200 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.200 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.200 "name": "raid_bdev1", 00:13:35.200 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:35.200 "strip_size_kb": 0, 00:13:35.200 "state": "online", 00:13:35.200 "raid_level": "raid1", 00:13:35.200 "superblock": true, 00:13:35.200 "num_base_bdevs": 2, 00:13:35.200 "num_base_bdevs_discovered": 1, 00:13:35.200 "num_base_bdevs_operational": 1, 00:13:35.200 "base_bdevs_list": [ 00:13:35.200 { 00:13:35.200 "name": null, 00:13:35.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.200 "is_configured": false, 00:13:35.200 "data_offset": 0, 00:13:35.200 "data_size": 63488 00:13:35.200 }, 00:13:35.200 { 00:13:35.200 "name": "BaseBdev2", 00:13:35.200 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:35.200 "is_configured": true, 00:13:35.200 "data_offset": 2048, 00:13:35.200 "data_size": 63488 00:13:35.200 } 00:13:35.200 ] 00:13:35.200 }' 00:13:35.200 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.200 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.769 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:35.769 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.769 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:35.769 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:35.769 05:51:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.769 05:51:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.769 05:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.769 05:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.769 05:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.769 05:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.769 05:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.769 "name": "raid_bdev1", 00:13:35.769 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:35.769 "strip_size_kb": 0, 00:13:35.769 "state": "online", 00:13:35.769 "raid_level": "raid1", 00:13:35.769 "superblock": true, 00:13:35.769 "num_base_bdevs": 2, 00:13:35.769 "num_base_bdevs_discovered": 1, 00:13:35.769 "num_base_bdevs_operational": 1, 00:13:35.769 "base_bdevs_list": [ 00:13:35.769 { 00:13:35.769 "name": null, 00:13:35.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.769 "is_configured": false, 00:13:35.769 "data_offset": 0, 00:13:35.769 "data_size": 63488 00:13:35.769 }, 00:13:35.769 { 00:13:35.769 "name": "BaseBdev2", 00:13:35.769 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:35.769 "is_configured": true, 00:13:35.769 "data_offset": 2048, 00:13:35.769 "data_size": 63488 00:13:35.769 } 00:13:35.769 ] 00:13:35.769 }' 00:13:35.769 05:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.769 05:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:35.769 05:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.769 05:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:35.769 05:51:43 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:35.769 05:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.769 05:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.769 05:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.769 05:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:35.769 05:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.769 05:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:35.769 [2024-12-12 05:51:43.130398] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:35.769 [2024-12-12 05:51:43.130519] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.769 [2024-12-12 05:51:43.130560] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:35.769 [2024-12-12 05:51:43.130593] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.769 [2024-12-12 05:51:43.131061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.769 [2024-12-12 05:51:43.131116] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:35.769 [2024-12-12 05:51:43.131238] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:35.769 [2024-12-12 05:51:43.131280] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:35.769 [2024-12-12 05:51:43.131343] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:35.769 [2024-12-12 05:51:43.131377] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:35.769 BaseBdev1 00:13:35.769 05:51:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.769 05:51:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:36.710 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:36.710 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:36.710 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.710 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.710 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.710 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:36.710 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.710 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.710 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.710 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.710 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.710 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:36.710 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.710 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:36.710 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.710 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.710 "name": "raid_bdev1", 00:13:36.710 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:36.710 "strip_size_kb": 0, 00:13:36.710 "state": "online", 00:13:36.710 "raid_level": "raid1", 00:13:36.710 "superblock": true, 00:13:36.710 "num_base_bdevs": 2, 00:13:36.710 "num_base_bdevs_discovered": 1, 00:13:36.710 "num_base_bdevs_operational": 1, 00:13:36.710 "base_bdevs_list": [ 00:13:36.710 { 00:13:36.710 "name": null, 00:13:36.710 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.710 "is_configured": false, 00:13:36.710 "data_offset": 0, 00:13:36.710 "data_size": 63488 00:13:36.710 }, 00:13:36.710 { 00:13:36.710 "name": "BaseBdev2", 00:13:36.710 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:36.710 "is_configured": true, 00:13:36.710 "data_offset": 2048, 00:13:36.710 "data_size": 63488 00:13:36.710 } 00:13:36.710 ] 00:13:36.710 }' 00:13:36.710 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.710 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.280 "name": "raid_bdev1", 00:13:37.280 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:37.280 "strip_size_kb": 0, 00:13:37.280 "state": "online", 00:13:37.280 "raid_level": "raid1", 00:13:37.280 "superblock": true, 00:13:37.280 "num_base_bdevs": 2, 00:13:37.280 "num_base_bdevs_discovered": 1, 00:13:37.280 "num_base_bdevs_operational": 1, 00:13:37.280 "base_bdevs_list": [ 00:13:37.280 { 00:13:37.280 "name": null, 00:13:37.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.280 "is_configured": false, 00:13:37.280 "data_offset": 0, 00:13:37.280 "data_size": 63488 00:13:37.280 }, 00:13:37.280 { 00:13:37.280 "name": "BaseBdev2", 00:13:37.280 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:37.280 "is_configured": true, 00:13:37.280 "data_offset": 2048, 00:13:37.280 "data_size": 63488 00:13:37.280 } 00:13:37.280 ] 00:13:37.280 }' 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@652 -- # local es=0 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:37.280 [2024-12-12 05:51:44.647925] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:37.280 [2024-12-12 05:51:44.648073] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:37.280 [2024-12-12 05:51:44.648093] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:37.280 request: 00:13:37.280 { 00:13:37.280 "base_bdev": "BaseBdev1", 00:13:37.280 "raid_bdev": "raid_bdev1", 00:13:37.280 "method": "bdev_raid_add_base_bdev", 00:13:37.280 "req_id": 1 00:13:37.280 } 00:13:37.280 Got JSON-RPC error response 00:13:37.280 response: 00:13:37.280 { 00:13:37.280 "code": -22, 00:13:37.280 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:37.280 } 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:37.280 05:51:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:38.219 05:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:38.219 05:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:38.219 05:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:38.219 05:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.219 05:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:38.219 05:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:38.219 05:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.219 05:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.219 05:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.219 05:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.219 05:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.219 05:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.219 05:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:38.219 05:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.219 05:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.219 05:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.219 "name": "raid_bdev1", 00:13:38.219 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:38.219 "strip_size_kb": 0, 00:13:38.219 "state": "online", 00:13:38.219 "raid_level": "raid1", 00:13:38.219 "superblock": true, 00:13:38.219 "num_base_bdevs": 2, 00:13:38.219 "num_base_bdevs_discovered": 1, 00:13:38.219 "num_base_bdevs_operational": 1, 00:13:38.219 "base_bdevs_list": [ 00:13:38.219 { 00:13:38.219 "name": null, 00:13:38.219 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.219 "is_configured": false, 00:13:38.219 "data_offset": 0, 00:13:38.219 "data_size": 63488 00:13:38.219 }, 00:13:38.219 { 00:13:38.219 "name": "BaseBdev2", 00:13:38.219 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:38.219 "is_configured": true, 00:13:38.219 "data_offset": 2048, 00:13:38.219 "data_size": 63488 00:13:38.219 } 00:13:38.219 ] 00:13:38.219 }' 00:13:38.219 05:51:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.219 05:51:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.789 05:51:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.789 "name": "raid_bdev1", 00:13:38.789 "uuid": "f179423f-a468-4b70-8387-c54360fc3f03", 00:13:38.789 "strip_size_kb": 0, 00:13:38.789 "state": "online", 00:13:38.789 "raid_level": "raid1", 00:13:38.789 "superblock": true, 00:13:38.789 "num_base_bdevs": 2, 00:13:38.789 "num_base_bdevs_discovered": 1, 00:13:38.789 "num_base_bdevs_operational": 1, 00:13:38.789 "base_bdevs_list": [ 00:13:38.789 { 00:13:38.789 "name": null, 00:13:38.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.789 "is_configured": false, 00:13:38.789 "data_offset": 0, 00:13:38.789 "data_size": 63488 00:13:38.789 }, 00:13:38.789 { 00:13:38.789 "name": "BaseBdev2", 00:13:38.789 "uuid": "e7fd67cf-c275-5564-b3e9-6d2b6e6cd79c", 00:13:38.789 "is_configured": true, 00:13:38.789 "data_offset": 2048, 00:13:38.789 "data_size": 63488 00:13:38.789 } 00:13:38.789 ] 00:13:38.789 }' 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:38.789 05:51:46 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77713 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 77713 ']' 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 77713 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77713 00:13:38.789 killing process with pid 77713 00:13:38.789 Received shutdown signal, test time was about 17.670024 seconds 00:13:38.789 00:13:38.789 Latency(us) 00:13:38.789 [2024-12-12T05:51:46.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.789 [2024-12-12T05:51:46.311Z] =================================================================================================================== 00:13:38.789 [2024-12-12T05:51:46.311Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77713' 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 77713 00:13:38.789 [2024-12-12 05:51:46.244418] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:38.789 [2024-12-12 05:51:46.244552] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.789 05:51:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 77713 00:13:38.789 [2024-12-12 05:51:46.244604] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:38.789 [2024-12-12 05:51:46.244618] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:39.049 [2024-12-12 05:51:46.456440] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:40.430 ************************************ 00:13:40.430 END TEST raid_rebuild_test_sb_io 00:13:40.430 ************************************ 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:40.430 00:13:40.430 real 0m20.668s 00:13:40.430 user 0m26.565s 00:13:40.430 sys 0m2.056s 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:40.430 05:51:47 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:40.430 05:51:47 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:13:40.430 05:51:47 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:40.430 05:51:47 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:40.430 05:51:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:40.430 ************************************ 00:13:40.430 START TEST raid_rebuild_test 00:13:40.430 ************************************ 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:40.430 05:51:47 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:40.430 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:40.431 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:40.431 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:40.431 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:40.431 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=78417 00:13:40.431 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:40.431 05:51:47 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 78417 00:13:40.431 05:51:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 78417 ']' 00:13:40.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.431 05:51:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.431 05:51:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:40.431 05:51:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.431 05:51:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:40.431 05:51:47 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:40.431 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:40.431 Zero copy mechanism will not be used. 
00:13:40.431 [2024-12-12 05:51:47.732105] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:13:40.431 [2024-12-12 05:51:47.732220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78417 ] 00:13:40.431 [2024-12-12 05:51:47.903228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.690 [2024-12-12 05:51:48.010111] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.690 [2024-12-12 05:51:48.199599] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.690 [2024-12-12 05:51:48.199627] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.260 BaseBdev1_malloc 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.260 
[2024-12-12 05:51:48.584542] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:41.260 [2024-12-12 05:51:48.584669] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.260 [2024-12-12 05:51:48.584710] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:41.260 [2024-12-12 05:51:48.584741] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.260 [2024-12-12 05:51:48.586831] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.260 [2024-12-12 05:51:48.586921] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:41.260 BaseBdev1 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.260 BaseBdev2_malloc 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.260 [2024-12-12 05:51:48.637882] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:41.260 [2024-12-12 05:51:48.637991] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:41.260 [2024-12-12 05:51:48.638015] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:41.260 [2024-12-12 05:51:48.638027] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.260 [2024-12-12 05:51:48.640034] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.260 [2024-12-12 05:51:48.640069] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:41.260 BaseBdev2 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.260 BaseBdev3_malloc 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.260 [2024-12-12 05:51:48.725365] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:41.260 [2024-12-12 05:51:48.725418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.260 [2024-12-12 05:51:48.725439] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:41.260 [2024-12-12 05:51:48.725449] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.260 [2024-12-12 05:51:48.727449] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.260 [2024-12-12 05:51:48.727492] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:41.260 BaseBdev3 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.260 BaseBdev4_malloc 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.260 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.260 [2024-12-12 05:51:48.779008] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:13:41.260 [2024-12-12 05:51:48.779134] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.260 [2024-12-12 05:51:48.779175] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:41.260 [2024-12-12 05:51:48.779212] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.520 [2024-12-12 05:51:48.781274] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.521 [2024-12-12 05:51:48.781362] 
vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:41.521 BaseBdev4 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.521 spare_malloc 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.521 spare_delay 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.521 [2024-12-12 05:51:48.843940] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:41.521 [2024-12-12 05:51:48.844051] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.521 [2024-12-12 05:51:48.844086] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:41.521 [2024-12-12 05:51:48.844115] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.521 [2024-12-12 
05:51:48.846200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.521 [2024-12-12 05:51:48.846269] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:41.521 spare 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.521 [2024-12-12 05:51:48.855973] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:41.521 [2024-12-12 05:51:48.857721] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:41.521 [2024-12-12 05:51:48.857782] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:41.521 [2024-12-12 05:51:48.857831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:41.521 [2024-12-12 05:51:48.857915] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:41.521 [2024-12-12 05:51:48.857931] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:41.521 [2024-12-12 05:51:48.858158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:41.521 [2024-12-12 05:51:48.858334] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:41.521 [2024-12-12 05:51:48.858381] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:41.521 [2024-12-12 05:51:48.858542] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.521 "name": "raid_bdev1", 00:13:41.521 "uuid": "1790a3f9-b6e1-4812-b13c-7779431a9631", 00:13:41.521 "strip_size_kb": 0, 00:13:41.521 "state": "online", 00:13:41.521 "raid_level": 
"raid1", 00:13:41.521 "superblock": false, 00:13:41.521 "num_base_bdevs": 4, 00:13:41.521 "num_base_bdevs_discovered": 4, 00:13:41.521 "num_base_bdevs_operational": 4, 00:13:41.521 "base_bdevs_list": [ 00:13:41.521 { 00:13:41.521 "name": "BaseBdev1", 00:13:41.521 "uuid": "509227b4-9fa0-5467-8876-15df09283bcc", 00:13:41.521 "is_configured": true, 00:13:41.521 "data_offset": 0, 00:13:41.521 "data_size": 65536 00:13:41.521 }, 00:13:41.521 { 00:13:41.521 "name": "BaseBdev2", 00:13:41.521 "uuid": "1cb24d5f-802a-5e5c-904a-1c1613c05997", 00:13:41.521 "is_configured": true, 00:13:41.521 "data_offset": 0, 00:13:41.521 "data_size": 65536 00:13:41.521 }, 00:13:41.521 { 00:13:41.521 "name": "BaseBdev3", 00:13:41.521 "uuid": "4bc9e267-4289-5d93-b74e-3f6eeba84f91", 00:13:41.521 "is_configured": true, 00:13:41.521 "data_offset": 0, 00:13:41.521 "data_size": 65536 00:13:41.521 }, 00:13:41.521 { 00:13:41.521 "name": "BaseBdev4", 00:13:41.521 "uuid": "ccd6d59f-b143-5971-a6ee-0925fc0056ab", 00:13:41.521 "is_configured": true, 00:13:41.521 "data_offset": 0, 00:13:41.521 "data_size": 65536 00:13:41.521 } 00:13:41.521 ] 00:13:41.521 }' 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.521 05:51:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.781 05:51:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:41.781 05:51:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.781 05:51:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:41.781 05:51:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:41.781 [2024-12-12 05:51:49.287558] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:41.781 05:51:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.041 05:51:49 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:42.041 05:51:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.041 05:51:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.041 05:51:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:42.041 05:51:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:42.041 05:51:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.041 05:51:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:42.041 05:51:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:42.041 05:51:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:42.041 05:51:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:42.041 05:51:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:42.041 05:51:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:42.041 05:51:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:42.041 05:51:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:42.041 05:51:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:42.041 05:51:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:42.041 05:51:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:42.041 05:51:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:42.041 05:51:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:42.041 05:51:49 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:42.041 [2024-12-12 05:51:49.554837] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:42.301 /dev/nbd0 00:13:42.301 05:51:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:42.301 05:51:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:42.301 05:51:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:42.301 05:51:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:42.301 05:51:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:42.301 05:51:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:42.301 05:51:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:42.301 05:51:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:42.301 05:51:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:42.301 05:51:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:42.301 05:51:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:42.301 1+0 records in 00:13:42.301 1+0 records out 00:13:42.301 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341417 s, 12.0 MB/s 00:13:42.301 05:51:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:42.301 05:51:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:42.301 05:51:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:13:42.301 05:51:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:42.301 05:51:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:42.301 05:51:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:42.301 05:51:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:42.301 05:51:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:42.301 05:51:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:42.301 05:51:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:47.579 65536+0 records in 00:13:47.579 65536+0 records out 00:13:47.579 33554432 bytes (34 MB, 32 MiB) copied, 5.34192 s, 6.3 MB/s 00:13:47.579 05:51:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:47.579 05:51:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:47.579 05:51:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:47.579 05:51:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:47.579 05:51:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:47.579 05:51:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:47.579 05:51:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:47.839 [2024-12-12 05:51:55.187599] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:47.839 
05:51:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.839 [2024-12-12 05:51:55.199670] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.839 05:51:55 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.839 "name": "raid_bdev1", 00:13:47.839 "uuid": "1790a3f9-b6e1-4812-b13c-7779431a9631", 00:13:47.839 "strip_size_kb": 0, 00:13:47.839 "state": "online", 00:13:47.839 "raid_level": "raid1", 00:13:47.839 "superblock": false, 00:13:47.839 "num_base_bdevs": 4, 00:13:47.839 "num_base_bdevs_discovered": 3, 00:13:47.839 "num_base_bdevs_operational": 3, 00:13:47.839 "base_bdevs_list": [ 00:13:47.839 { 00:13:47.839 "name": null, 00:13:47.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:47.839 "is_configured": false, 00:13:47.839 "data_offset": 0, 00:13:47.839 "data_size": 65536 00:13:47.839 }, 00:13:47.839 { 00:13:47.839 "name": "BaseBdev2", 00:13:47.839 "uuid": "1cb24d5f-802a-5e5c-904a-1c1613c05997", 00:13:47.839 "is_configured": true, 00:13:47.839 "data_offset": 0, 00:13:47.839 "data_size": 65536 00:13:47.839 }, 00:13:47.839 { 00:13:47.839 "name": "BaseBdev3", 00:13:47.839 "uuid": "4bc9e267-4289-5d93-b74e-3f6eeba84f91", 00:13:47.839 "is_configured": true, 00:13:47.839 "data_offset": 0, 00:13:47.839 "data_size": 65536 00:13:47.839 }, 00:13:47.839 { 00:13:47.839 "name": "BaseBdev4", 00:13:47.839 "uuid": "ccd6d59f-b143-5971-a6ee-0925fc0056ab", 00:13:47.839 
"is_configured": true, 00:13:47.839 "data_offset": 0, 00:13:47.839 "data_size": 65536 00:13:47.839 } 00:13:47.839 ] 00:13:47.839 }' 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.839 05:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.099 05:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:48.099 05:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.099 05:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.358 [2024-12-12 05:51:55.618940] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:48.358 [2024-12-12 05:51:55.634402] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:13:48.358 05:51:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.358 05:51:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:48.358 [2024-12-12 05:51:55.636284] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:49.298 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.298 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.298 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.298 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.298 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.298 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.298 05:51:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.298 
05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.298 05:51:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.298 05:51:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.298 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.298 "name": "raid_bdev1", 00:13:49.298 "uuid": "1790a3f9-b6e1-4812-b13c-7779431a9631", 00:13:49.298 "strip_size_kb": 0, 00:13:49.298 "state": "online", 00:13:49.298 "raid_level": "raid1", 00:13:49.298 "superblock": false, 00:13:49.298 "num_base_bdevs": 4, 00:13:49.298 "num_base_bdevs_discovered": 4, 00:13:49.298 "num_base_bdevs_operational": 4, 00:13:49.298 "process": { 00:13:49.298 "type": "rebuild", 00:13:49.298 "target": "spare", 00:13:49.298 "progress": { 00:13:49.298 "blocks": 20480, 00:13:49.298 "percent": 31 00:13:49.298 } 00:13:49.298 }, 00:13:49.298 "base_bdevs_list": [ 00:13:49.298 { 00:13:49.298 "name": "spare", 00:13:49.298 "uuid": "9a0068dd-db8f-50ea-a822-9fda2a999ba6", 00:13:49.298 "is_configured": true, 00:13:49.298 "data_offset": 0, 00:13:49.298 "data_size": 65536 00:13:49.298 }, 00:13:49.298 { 00:13:49.298 "name": "BaseBdev2", 00:13:49.298 "uuid": "1cb24d5f-802a-5e5c-904a-1c1613c05997", 00:13:49.298 "is_configured": true, 00:13:49.298 "data_offset": 0, 00:13:49.298 "data_size": 65536 00:13:49.298 }, 00:13:49.298 { 00:13:49.298 "name": "BaseBdev3", 00:13:49.298 "uuid": "4bc9e267-4289-5d93-b74e-3f6eeba84f91", 00:13:49.298 "is_configured": true, 00:13:49.298 "data_offset": 0, 00:13:49.298 "data_size": 65536 00:13:49.298 }, 00:13:49.298 { 00:13:49.298 "name": "BaseBdev4", 00:13:49.298 "uuid": "ccd6d59f-b143-5971-a6ee-0925fc0056ab", 00:13:49.298 "is_configured": true, 00:13:49.298 "data_offset": 0, 00:13:49.298 "data_size": 65536 00:13:49.298 } 00:13:49.298 ] 00:13:49.298 }' 00:13:49.298 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:13:49.298 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.298 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.298 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.298 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:49.298 05:51:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.298 05:51:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.298 [2024-12-12 05:51:56.791583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:49.558 [2024-12-12 05:51:56.840988] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:49.558 [2024-12-12 05:51:56.841043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.558 [2024-12-12 05:51:56.841059] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:49.558 [2024-12-12 05:51:56.841067] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:49.558 05:51:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.558 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:49.558 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.558 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.558 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.558 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.558 05:51:56 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.558 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.558 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.558 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.558 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.558 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.558 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.558 05:51:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.558 05:51:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.558 05:51:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.558 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.558 "name": "raid_bdev1", 00:13:49.558 "uuid": "1790a3f9-b6e1-4812-b13c-7779431a9631", 00:13:49.558 "strip_size_kb": 0, 00:13:49.558 "state": "online", 00:13:49.558 "raid_level": "raid1", 00:13:49.558 "superblock": false, 00:13:49.558 "num_base_bdevs": 4, 00:13:49.558 "num_base_bdevs_discovered": 3, 00:13:49.558 "num_base_bdevs_operational": 3, 00:13:49.558 "base_bdevs_list": [ 00:13:49.558 { 00:13:49.558 "name": null, 00:13:49.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.558 "is_configured": false, 00:13:49.558 "data_offset": 0, 00:13:49.558 "data_size": 65536 00:13:49.558 }, 00:13:49.558 { 00:13:49.558 "name": "BaseBdev2", 00:13:49.558 "uuid": "1cb24d5f-802a-5e5c-904a-1c1613c05997", 00:13:49.558 "is_configured": true, 00:13:49.558 "data_offset": 0, 00:13:49.558 "data_size": 65536 00:13:49.558 }, 00:13:49.558 { 00:13:49.558 "name": 
"BaseBdev3", 00:13:49.558 "uuid": "4bc9e267-4289-5d93-b74e-3f6eeba84f91", 00:13:49.558 "is_configured": true, 00:13:49.558 "data_offset": 0, 00:13:49.558 "data_size": 65536 00:13:49.558 }, 00:13:49.558 { 00:13:49.558 "name": "BaseBdev4", 00:13:49.558 "uuid": "ccd6d59f-b143-5971-a6ee-0925fc0056ab", 00:13:49.558 "is_configured": true, 00:13:49.558 "data_offset": 0, 00:13:49.558 "data_size": 65536 00:13:49.558 } 00:13:49.558 ] 00:13:49.558 }' 00:13:49.558 05:51:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.558 05:51:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.818 05:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:49.818 05:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.818 05:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:49.818 05:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:49.818 05:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.818 05:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.818 05:51:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.818 05:51:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.818 05:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.818 05:51:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.818 05:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.818 "name": "raid_bdev1", 00:13:49.818 "uuid": "1790a3f9-b6e1-4812-b13c-7779431a9631", 00:13:49.818 "strip_size_kb": 0, 00:13:49.818 "state": "online", 00:13:49.818 "raid_level": 
"raid1", 00:13:49.818 "superblock": false, 00:13:49.818 "num_base_bdevs": 4, 00:13:49.818 "num_base_bdevs_discovered": 3, 00:13:49.818 "num_base_bdevs_operational": 3, 00:13:49.818 "base_bdevs_list": [ 00:13:49.818 { 00:13:49.818 "name": null, 00:13:49.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.818 "is_configured": false, 00:13:49.818 "data_offset": 0, 00:13:49.818 "data_size": 65536 00:13:49.818 }, 00:13:49.818 { 00:13:49.818 "name": "BaseBdev2", 00:13:49.818 "uuid": "1cb24d5f-802a-5e5c-904a-1c1613c05997", 00:13:49.818 "is_configured": true, 00:13:49.818 "data_offset": 0, 00:13:49.818 "data_size": 65536 00:13:49.818 }, 00:13:49.818 { 00:13:49.818 "name": "BaseBdev3", 00:13:49.818 "uuid": "4bc9e267-4289-5d93-b74e-3f6eeba84f91", 00:13:49.818 "is_configured": true, 00:13:49.818 "data_offset": 0, 00:13:49.818 "data_size": 65536 00:13:49.818 }, 00:13:49.818 { 00:13:49.818 "name": "BaseBdev4", 00:13:49.818 "uuid": "ccd6d59f-b143-5971-a6ee-0925fc0056ab", 00:13:49.818 "is_configured": true, 00:13:49.818 "data_offset": 0, 00:13:49.818 "data_size": 65536 00:13:49.818 } 00:13:49.818 ] 00:13:49.818 }' 00:13:49.818 05:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.078 05:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:50.078 05:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.078 05:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:50.078 05:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:50.078 05:51:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.078 05:51:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.078 [2024-12-12 05:51:57.408232] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is 
claimed 00:13:50.078 [2024-12-12 05:51:57.422164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:13:50.078 05:51:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.078 05:51:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:50.078 [2024-12-12 05:51:57.424052] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:51.017 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:51.017 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.017 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:51.017 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:51.017 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.017 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.017 05:51:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.017 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.017 05:51:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.017 05:51:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.017 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.017 "name": "raid_bdev1", 00:13:51.017 "uuid": "1790a3f9-b6e1-4812-b13c-7779431a9631", 00:13:51.017 "strip_size_kb": 0, 00:13:51.017 "state": "online", 00:13:51.017 "raid_level": "raid1", 00:13:51.017 "superblock": false, 00:13:51.017 "num_base_bdevs": 4, 00:13:51.017 "num_base_bdevs_discovered": 4, 00:13:51.017 "num_base_bdevs_operational": 4, 
00:13:51.017 "process": { 00:13:51.017 "type": "rebuild", 00:13:51.017 "target": "spare", 00:13:51.017 "progress": { 00:13:51.017 "blocks": 20480, 00:13:51.017 "percent": 31 00:13:51.017 } 00:13:51.017 }, 00:13:51.017 "base_bdevs_list": [ 00:13:51.017 { 00:13:51.017 "name": "spare", 00:13:51.017 "uuid": "9a0068dd-db8f-50ea-a822-9fda2a999ba6", 00:13:51.017 "is_configured": true, 00:13:51.017 "data_offset": 0, 00:13:51.017 "data_size": 65536 00:13:51.017 }, 00:13:51.017 { 00:13:51.017 "name": "BaseBdev2", 00:13:51.017 "uuid": "1cb24d5f-802a-5e5c-904a-1c1613c05997", 00:13:51.017 "is_configured": true, 00:13:51.017 "data_offset": 0, 00:13:51.017 "data_size": 65536 00:13:51.017 }, 00:13:51.017 { 00:13:51.017 "name": "BaseBdev3", 00:13:51.017 "uuid": "4bc9e267-4289-5d93-b74e-3f6eeba84f91", 00:13:51.017 "is_configured": true, 00:13:51.017 "data_offset": 0, 00:13:51.017 "data_size": 65536 00:13:51.017 }, 00:13:51.017 { 00:13:51.017 "name": "BaseBdev4", 00:13:51.017 "uuid": "ccd6d59f-b143-5971-a6ee-0925fc0056ab", 00:13:51.017 "is_configured": true, 00:13:51.017 "data_offset": 0, 00:13:51.017 "data_size": 65536 00:13:51.017 } 00:13:51.017 ] 00:13:51.017 }' 00:13:51.017 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.017 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:51.017 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.277 [2024-12-12 05:51:58.563277] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:51.277 [2024-12-12 05:51:58.628682] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.277 "name": "raid_bdev1", 00:13:51.277 "uuid": "1790a3f9-b6e1-4812-b13c-7779431a9631", 00:13:51.277 "strip_size_kb": 0, 00:13:51.277 "state": "online", 00:13:51.277 "raid_level": "raid1", 00:13:51.277 "superblock": false, 00:13:51.277 "num_base_bdevs": 4, 00:13:51.277 "num_base_bdevs_discovered": 3, 00:13:51.277 "num_base_bdevs_operational": 3, 00:13:51.277 "process": { 00:13:51.277 "type": "rebuild", 00:13:51.277 "target": "spare", 00:13:51.277 "progress": { 00:13:51.277 "blocks": 24576, 00:13:51.277 "percent": 37 00:13:51.277 } 00:13:51.277 }, 00:13:51.277 "base_bdevs_list": [ 00:13:51.277 { 00:13:51.277 "name": "spare", 00:13:51.277 "uuid": "9a0068dd-db8f-50ea-a822-9fda2a999ba6", 00:13:51.277 "is_configured": true, 00:13:51.277 "data_offset": 0, 00:13:51.277 "data_size": 65536 00:13:51.277 }, 00:13:51.277 { 00:13:51.277 "name": null, 00:13:51.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.277 "is_configured": false, 00:13:51.277 "data_offset": 0, 00:13:51.277 "data_size": 65536 00:13:51.277 }, 00:13:51.277 { 00:13:51.277 "name": "BaseBdev3", 00:13:51.277 "uuid": "4bc9e267-4289-5d93-b74e-3f6eeba84f91", 00:13:51.277 "is_configured": true, 00:13:51.277 "data_offset": 0, 00:13:51.277 "data_size": 65536 00:13:51.277 }, 00:13:51.277 { 00:13:51.277 "name": "BaseBdev4", 00:13:51.277 "uuid": "ccd6d59f-b143-5971-a6ee-0925fc0056ab", 00:13:51.277 "is_configured": true, 00:13:51.277 "data_offset": 0, 00:13:51.277 "data_size": 65536 00:13:51.277 } 00:13:51.277 ] 00:13:51.277 }' 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:51.277 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.277 05:51:58 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:51.278 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=432 00:13:51.278 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:51.278 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:51.278 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.278 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:51.278 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:51.278 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.278 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.278 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.278 05:51:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.278 05:51:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.278 05:51:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.538 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.538 "name": "raid_bdev1", 00:13:51.538 "uuid": "1790a3f9-b6e1-4812-b13c-7779431a9631", 00:13:51.538 "strip_size_kb": 0, 00:13:51.538 "state": "online", 00:13:51.538 "raid_level": "raid1", 00:13:51.538 "superblock": false, 00:13:51.538 "num_base_bdevs": 4, 00:13:51.538 "num_base_bdevs_discovered": 3, 00:13:51.538 "num_base_bdevs_operational": 3, 00:13:51.538 "process": { 00:13:51.538 "type": "rebuild", 00:13:51.538 "target": "spare", 00:13:51.538 "progress": { 00:13:51.538 "blocks": 26624, 00:13:51.538 "percent": 40 
00:13:51.538 } 00:13:51.538 }, 00:13:51.538 "base_bdevs_list": [ 00:13:51.538 { 00:13:51.538 "name": "spare", 00:13:51.538 "uuid": "9a0068dd-db8f-50ea-a822-9fda2a999ba6", 00:13:51.538 "is_configured": true, 00:13:51.538 "data_offset": 0, 00:13:51.538 "data_size": 65536 00:13:51.538 }, 00:13:51.538 { 00:13:51.538 "name": null, 00:13:51.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.538 "is_configured": false, 00:13:51.538 "data_offset": 0, 00:13:51.538 "data_size": 65536 00:13:51.538 }, 00:13:51.538 { 00:13:51.538 "name": "BaseBdev3", 00:13:51.538 "uuid": "4bc9e267-4289-5d93-b74e-3f6eeba84f91", 00:13:51.538 "is_configured": true, 00:13:51.538 "data_offset": 0, 00:13:51.538 "data_size": 65536 00:13:51.538 }, 00:13:51.538 { 00:13:51.538 "name": "BaseBdev4", 00:13:51.538 "uuid": "ccd6d59f-b143-5971-a6ee-0925fc0056ab", 00:13:51.538 "is_configured": true, 00:13:51.538 "data_offset": 0, 00:13:51.538 "data_size": 65536 00:13:51.538 } 00:13:51.538 ] 00:13:51.538 }' 00:13:51.538 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.538 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:51.538 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.538 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:51.538 05:51:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:52.476 05:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:52.476 05:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.476 05:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.476 05:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.476 05:51:59 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.476 05:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.476 05:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.476 05:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.476 05:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.476 05:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.476 05:51:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.476 05:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.476 "name": "raid_bdev1", 00:13:52.476 "uuid": "1790a3f9-b6e1-4812-b13c-7779431a9631", 00:13:52.476 "strip_size_kb": 0, 00:13:52.476 "state": "online", 00:13:52.476 "raid_level": "raid1", 00:13:52.476 "superblock": false, 00:13:52.476 "num_base_bdevs": 4, 00:13:52.476 "num_base_bdevs_discovered": 3, 00:13:52.476 "num_base_bdevs_operational": 3, 00:13:52.476 "process": { 00:13:52.476 "type": "rebuild", 00:13:52.476 "target": "spare", 00:13:52.476 "progress": { 00:13:52.476 "blocks": 49152, 00:13:52.476 "percent": 75 00:13:52.476 } 00:13:52.476 }, 00:13:52.476 "base_bdevs_list": [ 00:13:52.476 { 00:13:52.476 "name": "spare", 00:13:52.476 "uuid": "9a0068dd-db8f-50ea-a822-9fda2a999ba6", 00:13:52.476 "is_configured": true, 00:13:52.476 "data_offset": 0, 00:13:52.476 "data_size": 65536 00:13:52.476 }, 00:13:52.476 { 00:13:52.476 "name": null, 00:13:52.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.476 "is_configured": false, 00:13:52.476 "data_offset": 0, 00:13:52.476 "data_size": 65536 00:13:52.476 }, 00:13:52.476 { 00:13:52.476 "name": "BaseBdev3", 00:13:52.476 "uuid": "4bc9e267-4289-5d93-b74e-3f6eeba84f91", 00:13:52.476 "is_configured": true, 
00:13:52.476 "data_offset": 0, 00:13:52.476 "data_size": 65536 00:13:52.476 }, 00:13:52.476 { 00:13:52.476 "name": "BaseBdev4", 00:13:52.476 "uuid": "ccd6d59f-b143-5971-a6ee-0925fc0056ab", 00:13:52.476 "is_configured": true, 00:13:52.476 "data_offset": 0, 00:13:52.476 "data_size": 65536 00:13:52.476 } 00:13:52.476 ] 00:13:52.476 }' 00:13:52.476 05:51:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.743 05:52:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.743 05:52:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.743 05:52:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.743 05:52:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:53.323 [2024-12-12 05:52:00.636224] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:53.323 [2024-12-12 05:52:00.636293] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:53.323 [2024-12-12 05:52:00.636337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.586 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:53.586 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.586 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.586 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.586 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.586 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.586 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:53.586 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.586 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.586 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.586 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.586 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.586 "name": "raid_bdev1", 00:13:53.586 "uuid": "1790a3f9-b6e1-4812-b13c-7779431a9631", 00:13:53.586 "strip_size_kb": 0, 00:13:53.586 "state": "online", 00:13:53.586 "raid_level": "raid1", 00:13:53.586 "superblock": false, 00:13:53.586 "num_base_bdevs": 4, 00:13:53.586 "num_base_bdevs_discovered": 3, 00:13:53.586 "num_base_bdevs_operational": 3, 00:13:53.586 "base_bdevs_list": [ 00:13:53.586 { 00:13:53.586 "name": "spare", 00:13:53.586 "uuid": "9a0068dd-db8f-50ea-a822-9fda2a999ba6", 00:13:53.586 "is_configured": true, 00:13:53.586 "data_offset": 0, 00:13:53.586 "data_size": 65536 00:13:53.586 }, 00:13:53.586 { 00:13:53.586 "name": null, 00:13:53.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.586 "is_configured": false, 00:13:53.586 "data_offset": 0, 00:13:53.586 "data_size": 65536 00:13:53.586 }, 00:13:53.586 { 00:13:53.586 "name": "BaseBdev3", 00:13:53.586 "uuid": "4bc9e267-4289-5d93-b74e-3f6eeba84f91", 00:13:53.586 "is_configured": true, 00:13:53.586 "data_offset": 0, 00:13:53.586 "data_size": 65536 00:13:53.586 }, 00:13:53.586 { 00:13:53.586 "name": "BaseBdev4", 00:13:53.586 "uuid": "ccd6d59f-b143-5971-a6ee-0925fc0056ab", 00:13:53.586 "is_configured": true, 00:13:53.586 "data_offset": 0, 00:13:53.586 "data_size": 65536 00:13:53.586 } 00:13:53.586 ] 00:13:53.586 }' 00:13:53.586 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.846 05:52:01 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:53.846 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.846 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:53.846 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:13:53.846 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:53.846 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.846 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:53.846 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:53.846 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.846 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.846 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.846 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.846 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.846 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.846 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.846 "name": "raid_bdev1", 00:13:53.846 "uuid": "1790a3f9-b6e1-4812-b13c-7779431a9631", 00:13:53.846 "strip_size_kb": 0, 00:13:53.846 "state": "online", 00:13:53.846 "raid_level": "raid1", 00:13:53.846 "superblock": false, 00:13:53.846 "num_base_bdevs": 4, 00:13:53.846 "num_base_bdevs_discovered": 3, 00:13:53.846 "num_base_bdevs_operational": 3, 00:13:53.846 "base_bdevs_list": [ 00:13:53.846 { 00:13:53.846 "name": "spare", 
00:13:53.846 "uuid": "9a0068dd-db8f-50ea-a822-9fda2a999ba6", 00:13:53.846 "is_configured": true, 00:13:53.846 "data_offset": 0, 00:13:53.846 "data_size": 65536 00:13:53.846 }, 00:13:53.846 { 00:13:53.846 "name": null, 00:13:53.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.846 "is_configured": false, 00:13:53.846 "data_offset": 0, 00:13:53.846 "data_size": 65536 00:13:53.846 }, 00:13:53.846 { 00:13:53.846 "name": "BaseBdev3", 00:13:53.846 "uuid": "4bc9e267-4289-5d93-b74e-3f6eeba84f91", 00:13:53.846 "is_configured": true, 00:13:53.846 "data_offset": 0, 00:13:53.846 "data_size": 65536 00:13:53.846 }, 00:13:53.846 { 00:13:53.846 "name": "BaseBdev4", 00:13:53.846 "uuid": "ccd6d59f-b143-5971-a6ee-0925fc0056ab", 00:13:53.846 "is_configured": true, 00:13:53.846 "data_offset": 0, 00:13:53.846 "data_size": 65536 00:13:53.846 } 00:13:53.846 ] 00:13:53.846 }' 00:13:53.846 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.846 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:53.846 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.846 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:53.846 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:53.847 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:53.847 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:53.847 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:53.847 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:53.847 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:53.847 05:52:01 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:53.847 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:53.847 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:53.847 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:53.847 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.847 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.847 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.847 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:53.847 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.847 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:53.847 "name": "raid_bdev1", 00:13:53.847 "uuid": "1790a3f9-b6e1-4812-b13c-7779431a9631", 00:13:53.847 "strip_size_kb": 0, 00:13:53.847 "state": "online", 00:13:53.847 "raid_level": "raid1", 00:13:53.847 "superblock": false, 00:13:53.847 "num_base_bdevs": 4, 00:13:53.847 "num_base_bdevs_discovered": 3, 00:13:53.847 "num_base_bdevs_operational": 3, 00:13:53.847 "base_bdevs_list": [ 00:13:53.847 { 00:13:53.847 "name": "spare", 00:13:53.847 "uuid": "9a0068dd-db8f-50ea-a822-9fda2a999ba6", 00:13:53.847 "is_configured": true, 00:13:53.847 "data_offset": 0, 00:13:53.847 "data_size": 65536 00:13:53.847 }, 00:13:53.847 { 00:13:53.847 "name": null, 00:13:53.847 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:53.847 "is_configured": false, 00:13:53.847 "data_offset": 0, 00:13:53.847 "data_size": 65536 00:13:53.847 }, 00:13:53.847 { 00:13:53.847 "name": "BaseBdev3", 00:13:53.847 "uuid": "4bc9e267-4289-5d93-b74e-3f6eeba84f91", 00:13:53.847 "is_configured": true, 
00:13:53.847 "data_offset": 0, 00:13:53.847 "data_size": 65536 00:13:53.847 }, 00:13:53.847 { 00:13:53.847 "name": "BaseBdev4", 00:13:53.847 "uuid": "ccd6d59f-b143-5971-a6ee-0925fc0056ab", 00:13:53.847 "is_configured": true, 00:13:53.847 "data_offset": 0, 00:13:53.847 "data_size": 65536 00:13:53.847 } 00:13:53.847 ] 00:13:53.847 }' 00:13:53.847 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:53.847 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.417 [2024-12-12 05:52:01.695124] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:54.417 [2024-12-12 05:52:01.695197] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:54.417 [2024-12-12 05:52:01.695294] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:54.417 [2024-12-12 05:52:01.695419] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:54.417 [2024-12-12 05:52:01.695468] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:54.417 05:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:54.682 /dev/nbd0 00:13:54.682 05:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:54.682 05:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:54.682 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:54.682 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:54.682 05:52:01 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:54.682 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:54.682 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:54.682 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:54.682 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:54.682 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:54.682 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:54.682 1+0 records in 00:13:54.682 1+0 records out 00:13:54.682 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322384 s, 12.7 MB/s 00:13:54.682 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.682 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:54.682 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.682 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:54.682 05:52:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:54.682 05:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:54.683 05:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:54.683 05:52:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:54.683 /dev/nbd1 00:13:54.683 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:54.683 
05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:54.683 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:13:54.683 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:13:54.683 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:54.683 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:54.683 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:13:54.945 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:13:54.945 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:54.945 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:54.945 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:54.945 1+0 records in 00:13:54.945 1+0 records out 00:13:54.945 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028502 s, 14.4 MB/s 00:13:54.945 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.945 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:13:54.945 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:54.945 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:54.945 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:13:54.945 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:54.945 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 
00:13:54.945 05:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:54.945 05:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:54.945 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:54.945 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:54.945 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:54.945 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:54.945 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:54.945 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:55.205 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:55.205 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:55.205 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:55.205 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.205 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.205 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:55.205 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:55.205 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.205 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.205 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:55.464 
05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:55.464 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:55.464 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:55.464 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:55.464 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:55.464 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:55.464 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:55.464 05:52:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:55.464 05:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:55.465 05:52:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 78417 00:13:55.465 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 78417 ']' 00:13:55.465 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 78417 00:13:55.465 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:13:55.465 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:55.465 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78417 00:13:55.465 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:55.465 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:55.465 killing process with pid 78417 00:13:55.465 Received shutdown signal, test time was about 60.000000 seconds 00:13:55.465 00:13:55.465 Latency(us) 00:13:55.465 [2024-12-12T05:52:02.987Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:55.465 
[2024-12-12T05:52:02.987Z] =================================================================================================================== 00:13:55.465 [2024-12-12T05:52:02.987Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:55.465 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78417' 00:13:55.465 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 78417 00:13:55.465 [2024-12-12 05:52:02.848757] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:55.465 05:52:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 78417 00:13:56.034 [2024-12-12 05:52:03.297409] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:56.974 00:13:56.974 real 0m16.695s 00:13:56.974 user 0m18.721s 00:13:56.974 sys 0m2.954s 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.974 ************************************ 00:13:56.974 END TEST raid_rebuild_test 00:13:56.974 ************************************ 00:13:56.974 05:52:04 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:13:56.974 05:52:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:13:56.974 05:52:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:56.974 05:52:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:56.974 ************************************ 00:13:56.974 START TEST raid_rebuild_test_sb 00:13:56.974 ************************************ 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:13:56.974 05:52:04 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78756 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78756 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78756 ']' 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:56.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:56.974 05:52:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:57.235 [2024-12-12 05:52:04.498659] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:13:57.235 [2024-12-12 05:52:04.498876] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:57.235 Zero copy mechanism will not be used. 00:13:57.235 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78756 ] 00:13:57.235 [2024-12-12 05:52:04.670335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.495 [2024-12-12 05:52:04.774220] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.495 [2024-12-12 05:52:04.965338] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:57.495 [2024-12-12 05:52:04.965386] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.065 
BaseBdev1_malloc 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.065 [2024-12-12 05:52:05.355568] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:58.065 [2024-12-12 05:52:05.355624] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.065 [2024-12-12 05:52:05.355664] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:58.065 [2024-12-12 05:52:05.355674] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.065 [2024-12-12 05:52:05.357675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.065 [2024-12-12 05:52:05.357796] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:58.065 BaseBdev1 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.065 BaseBdev2_malloc 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.065 [2024-12-12 05:52:05.411482] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:58.065 [2024-12-12 05:52:05.411548] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.065 [2024-12-12 05:52:05.411568] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:58.065 [2024-12-12 05:52:05.411578] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.065 [2024-12-12 05:52:05.413602] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.065 [2024-12-12 05:52:05.413637] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:58.065 BaseBdev2 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.065 BaseBdev3_malloc 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.065 [2024-12-12 05:52:05.497036] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:13:58.065 [2024-12-12 05:52:05.497087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.065 [2024-12-12 05:52:05.497123] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:58.065 [2024-12-12 05:52:05.497133] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.065 [2024-12-12 05:52:05.499105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.065 [2024-12-12 05:52:05.499148] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:58.065 BaseBdev3 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.065 BaseBdev4_malloc 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.065 [2024-12-12 05:52:05.553229] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: 
Match on BaseBdev4_malloc 00:13:58.065 [2024-12-12 05:52:05.553298] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.065 [2024-12-12 05:52:05.553319] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:58.065 [2024-12-12 05:52:05.553328] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.065 [2024-12-12 05:52:05.555337] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.065 [2024-12-12 05:52:05.555381] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:58.065 BaseBdev4 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.065 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.325 spare_malloc 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.325 spare_delay 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.325 05:52:05 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.325 [2024-12-12 05:52:05.618471] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:58.325 [2024-12-12 05:52:05.618536] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.325 [2024-12-12 05:52:05.618552] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:58.325 [2024-12-12 05:52:05.618562] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.325 [2024-12-12 05:52:05.620551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.325 [2024-12-12 05:52:05.620637] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:58.325 spare 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.325 [2024-12-12 05:52:05.630523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:58.325 [2024-12-12 05:52:05.632272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:58.325 [2024-12-12 05:52:05.632388] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:58.325 [2024-12-12 05:52:05.632444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:58.325 [2024-12-12 05:52:05.632664] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:58.325 [2024-12-12 05:52:05.632679] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:58.325 [2024-12-12 05:52:05.632899] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:58.325 [2024-12-12 05:52:05.633054] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:58.325 [2024-12-12 05:52:05.633064] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:58.325 [2024-12-12 05:52:05.633199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:58.325 "name": "raid_bdev1", 00:13:58.325 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:13:58.325 "strip_size_kb": 0, 00:13:58.325 "state": "online", 00:13:58.325 "raid_level": "raid1", 00:13:58.325 "superblock": true, 00:13:58.325 "num_base_bdevs": 4, 00:13:58.325 "num_base_bdevs_discovered": 4, 00:13:58.325 "num_base_bdevs_operational": 4, 00:13:58.325 "base_bdevs_list": [ 00:13:58.325 { 00:13:58.325 "name": "BaseBdev1", 00:13:58.325 "uuid": "8f9e3cb8-8c7c-59af-973e-c8a969c23e0d", 00:13:58.325 "is_configured": true, 00:13:58.325 "data_offset": 2048, 00:13:58.325 "data_size": 63488 00:13:58.325 }, 00:13:58.325 { 00:13:58.325 "name": "BaseBdev2", 00:13:58.325 "uuid": "269964fb-a0f4-5c46-87dc-92c1a8e61278", 00:13:58.325 "is_configured": true, 00:13:58.325 "data_offset": 2048, 00:13:58.325 "data_size": 63488 00:13:58.325 }, 00:13:58.325 { 00:13:58.325 "name": "BaseBdev3", 00:13:58.325 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:13:58.325 "is_configured": true, 00:13:58.325 "data_offset": 2048, 00:13:58.325 "data_size": 63488 00:13:58.325 }, 00:13:58.325 { 00:13:58.325 "name": "BaseBdev4", 00:13:58.325 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:13:58.325 "is_configured": true, 00:13:58.325 "data_offset": 2048, 00:13:58.325 "data_size": 63488 00:13:58.325 } 00:13:58.325 ] 00:13:58.325 }' 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:58.325 05:52:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:13:58.585 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:58.585 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:58.585 05:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.585 05:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.585 [2024-12-12 05:52:06.034126] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:58.585 05:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.585 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:58.585 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:58.585 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:58.585 05:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.585 05:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:58.585 05:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:58.845 
05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:58.845 [2024-12-12 05:52:06.301418] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:58.845 /dev/nbd0 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:13:58.845 05:52:06 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:13:58.845 05:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:58.845 1+0 records in 00:13:58.845 1+0 records out 00:13:58.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038401 s, 10.7 MB/s 00:13:59.105 05:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.105 05:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:13:59.105 05:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:59.105 05:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:13:59.105 05:52:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:13:59.105 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:59.105 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:59.105 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:59.105 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:59.105 05:52:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:04.392 63488+0 records in 00:14:04.392 63488+0 records out 00:14:04.392 32505856 bytes (33 MB, 31 MiB) copied, 4.72065 s, 6.9 MB/s 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:04.392 [2024-12-12 05:52:11.304308] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.392 [2024-12-12 05:52:11.316379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.392 05:52:11 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:04.392 "name": "raid_bdev1", 00:14:04.392 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:04.392 "strip_size_kb": 0, 00:14:04.392 "state": "online", 00:14:04.392 "raid_level": "raid1", 00:14:04.392 "superblock": true, 00:14:04.392 "num_base_bdevs": 4, 
00:14:04.392 "num_base_bdevs_discovered": 3, 00:14:04.392 "num_base_bdevs_operational": 3, 00:14:04.392 "base_bdevs_list": [ 00:14:04.392 { 00:14:04.392 "name": null, 00:14:04.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:04.392 "is_configured": false, 00:14:04.392 "data_offset": 0, 00:14:04.392 "data_size": 63488 00:14:04.392 }, 00:14:04.392 { 00:14:04.392 "name": "BaseBdev2", 00:14:04.392 "uuid": "269964fb-a0f4-5c46-87dc-92c1a8e61278", 00:14:04.392 "is_configured": true, 00:14:04.392 "data_offset": 2048, 00:14:04.392 "data_size": 63488 00:14:04.392 }, 00:14:04.392 { 00:14:04.392 "name": "BaseBdev3", 00:14:04.392 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:04.392 "is_configured": true, 00:14:04.392 "data_offset": 2048, 00:14:04.392 "data_size": 63488 00:14:04.392 }, 00:14:04.392 { 00:14:04.392 "name": "BaseBdev4", 00:14:04.392 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:04.392 "is_configured": true, 00:14:04.392 "data_offset": 2048, 00:14:04.392 "data_size": 63488 00:14:04.392 } 00:14:04.392 ] 00:14:04.392 }' 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:04.392 05:52:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.393 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:04.393 05:52:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.393 05:52:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:04.393 [2024-12-12 05:52:11.771613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:04.393 [2024-12-12 05:52:11.787176] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:04.393 05:52:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.393 [2024-12-12 05:52:11.789018] 
bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:04.393 05:52:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:05.332 05:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.332 05:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.332 05:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.332 05:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.332 05:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:05.332 05:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.332 05:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.332 05:52:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.332 05:52:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.332 05:52:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.332 05:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.332 "name": "raid_bdev1", 00:14:05.332 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:05.332 "strip_size_kb": 0, 00:14:05.332 "state": "online", 00:14:05.332 "raid_level": "raid1", 00:14:05.332 "superblock": true, 00:14:05.332 "num_base_bdevs": 4, 00:14:05.332 "num_base_bdevs_discovered": 4, 00:14:05.332 "num_base_bdevs_operational": 4, 00:14:05.332 "process": { 00:14:05.332 "type": "rebuild", 00:14:05.332 "target": "spare", 00:14:05.332 "progress": { 00:14:05.332 "blocks": 20480, 00:14:05.332 "percent": 32 00:14:05.332 } 00:14:05.332 }, 00:14:05.332 "base_bdevs_list": [ 
00:14:05.332 { 00:14:05.332 "name": "spare", 00:14:05.332 "uuid": "a8df097e-77bb-56ac-8c05-889c6a7b6d96", 00:14:05.332 "is_configured": true, 00:14:05.332 "data_offset": 2048, 00:14:05.332 "data_size": 63488 00:14:05.332 }, 00:14:05.332 { 00:14:05.332 "name": "BaseBdev2", 00:14:05.332 "uuid": "269964fb-a0f4-5c46-87dc-92c1a8e61278", 00:14:05.332 "is_configured": true, 00:14:05.332 "data_offset": 2048, 00:14:05.332 "data_size": 63488 00:14:05.332 }, 00:14:05.332 { 00:14:05.332 "name": "BaseBdev3", 00:14:05.332 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:05.332 "is_configured": true, 00:14:05.332 "data_offset": 2048, 00:14:05.332 "data_size": 63488 00:14:05.332 }, 00:14:05.332 { 00:14:05.332 "name": "BaseBdev4", 00:14:05.332 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:05.332 "is_configured": true, 00:14:05.332 "data_offset": 2048, 00:14:05.332 "data_size": 63488 00:14:05.332 } 00:14:05.332 ] 00:14:05.332 }' 00:14:05.332 05:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.593 05:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.593 05:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.593 05:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.593 05:52:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:05.593 05:52:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.593 05:52:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:05.593 [2024-12-12 05:52:12.948768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.593 [2024-12-12 05:52:12.993700] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:05.593 
[2024-12-12 05:52:12.993822] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.593 [2024-12-12 05:52:12.993859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:05.593 [2024-12-12 05:52:12.993883] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:05.593 05:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.593 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:05.593 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.593 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.593 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.593 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.593 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:05.593 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.593 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.593 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.593 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.593 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.593 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.593 05:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.593 05:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:05.593 05:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.593 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.593 "name": "raid_bdev1", 00:14:05.593 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:05.593 "strip_size_kb": 0, 00:14:05.593 "state": "online", 00:14:05.593 "raid_level": "raid1", 00:14:05.593 "superblock": true, 00:14:05.593 "num_base_bdevs": 4, 00:14:05.593 "num_base_bdevs_discovered": 3, 00:14:05.593 "num_base_bdevs_operational": 3, 00:14:05.593 "base_bdevs_list": [ 00:14:05.593 { 00:14:05.593 "name": null, 00:14:05.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:05.593 "is_configured": false, 00:14:05.593 "data_offset": 0, 00:14:05.593 "data_size": 63488 00:14:05.593 }, 00:14:05.593 { 00:14:05.593 "name": "BaseBdev2", 00:14:05.593 "uuid": "269964fb-a0f4-5c46-87dc-92c1a8e61278", 00:14:05.593 "is_configured": true, 00:14:05.593 "data_offset": 2048, 00:14:05.593 "data_size": 63488 00:14:05.593 }, 00:14:05.593 { 00:14:05.593 "name": "BaseBdev3", 00:14:05.593 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:05.593 "is_configured": true, 00:14:05.593 "data_offset": 2048, 00:14:05.593 "data_size": 63488 00:14:05.593 }, 00:14:05.593 { 00:14:05.593 "name": "BaseBdev4", 00:14:05.593 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:05.593 "is_configured": true, 00:14:05.593 "data_offset": 2048, 00:14:05.593 "data_size": 63488 00:14:05.593 } 00:14:05.593 ] 00:14:05.593 }' 00:14:05.593 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.593 05:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.163 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:06.163 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.163 05:52:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:06.163 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:06.163 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.163 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.163 05:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.163 05:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.163 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.163 05:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.163 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.163 "name": "raid_bdev1", 00:14:06.163 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:06.163 "strip_size_kb": 0, 00:14:06.163 "state": "online", 00:14:06.163 "raid_level": "raid1", 00:14:06.163 "superblock": true, 00:14:06.163 "num_base_bdevs": 4, 00:14:06.163 "num_base_bdevs_discovered": 3, 00:14:06.163 "num_base_bdevs_operational": 3, 00:14:06.163 "base_bdevs_list": [ 00:14:06.163 { 00:14:06.163 "name": null, 00:14:06.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:06.163 "is_configured": false, 00:14:06.163 "data_offset": 0, 00:14:06.163 "data_size": 63488 00:14:06.163 }, 00:14:06.163 { 00:14:06.163 "name": "BaseBdev2", 00:14:06.163 "uuid": "269964fb-a0f4-5c46-87dc-92c1a8e61278", 00:14:06.163 "is_configured": true, 00:14:06.163 "data_offset": 2048, 00:14:06.163 "data_size": 63488 00:14:06.163 }, 00:14:06.163 { 00:14:06.163 "name": "BaseBdev3", 00:14:06.163 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:06.163 "is_configured": true, 00:14:06.163 "data_offset": 2048, 00:14:06.163 "data_size": 63488 
00:14:06.163 }, 00:14:06.163 { 00:14:06.163 "name": "BaseBdev4", 00:14:06.163 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:06.163 "is_configured": true, 00:14:06.163 "data_offset": 2048, 00:14:06.163 "data_size": 63488 00:14:06.163 } 00:14:06.163 ] 00:14:06.163 }' 00:14:06.163 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.163 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.163 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.163 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.163 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:06.163 05:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.163 05:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:06.163 [2024-12-12 05:52:13.556929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:06.163 [2024-12-12 05:52:13.570930] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:06.163 05:52:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.163 05:52:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:06.163 [2024-12-12 05:52:13.572788] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:07.103 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.103 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.103 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 
00:14:07.103 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.103 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.103 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.103 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.103 05:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.103 05:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.103 05:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.363 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.363 "name": "raid_bdev1", 00:14:07.363 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:07.363 "strip_size_kb": 0, 00:14:07.363 "state": "online", 00:14:07.363 "raid_level": "raid1", 00:14:07.363 "superblock": true, 00:14:07.363 "num_base_bdevs": 4, 00:14:07.363 "num_base_bdevs_discovered": 4, 00:14:07.363 "num_base_bdevs_operational": 4, 00:14:07.363 "process": { 00:14:07.363 "type": "rebuild", 00:14:07.363 "target": "spare", 00:14:07.363 "progress": { 00:14:07.363 "blocks": 20480, 00:14:07.363 "percent": 32 00:14:07.363 } 00:14:07.363 }, 00:14:07.363 "base_bdevs_list": [ 00:14:07.363 { 00:14:07.363 "name": "spare", 00:14:07.363 "uuid": "a8df097e-77bb-56ac-8c05-889c6a7b6d96", 00:14:07.363 "is_configured": true, 00:14:07.363 "data_offset": 2048, 00:14:07.363 "data_size": 63488 00:14:07.363 }, 00:14:07.363 { 00:14:07.363 "name": "BaseBdev2", 00:14:07.363 "uuid": "269964fb-a0f4-5c46-87dc-92c1a8e61278", 00:14:07.363 "is_configured": true, 00:14:07.363 "data_offset": 2048, 00:14:07.363 "data_size": 63488 00:14:07.363 }, 00:14:07.363 { 00:14:07.363 "name": "BaseBdev3", 00:14:07.363 "uuid": 
"4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:07.363 "is_configured": true, 00:14:07.363 "data_offset": 2048, 00:14:07.363 "data_size": 63488 00:14:07.363 }, 00:14:07.363 { 00:14:07.363 "name": "BaseBdev4", 00:14:07.363 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:07.363 "is_configured": true, 00:14:07.363 "data_offset": 2048, 00:14:07.363 "data_size": 63488 00:14:07.363 } 00:14:07.363 ] 00:14:07.363 }' 00:14:07.363 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.363 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.363 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.363 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.363 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:07.363 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:07.363 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:07.363 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:07.363 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:07.363 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:07.363 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:07.363 05:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.363 05:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.363 [2024-12-12 05:52:14.704534] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:07.363 [2024-12-12 05:52:14.877223] 
bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:07.363 05:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.363 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:07.363 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:07.363 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.363 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.363 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.363 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.623 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.623 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.623 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.623 05:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.623 05:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.623 05:52:14 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.623 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.623 "name": "raid_bdev1", 00:14:07.623 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:07.623 "strip_size_kb": 0, 00:14:07.623 "state": "online", 00:14:07.623 "raid_level": "raid1", 00:14:07.623 "superblock": true, 00:14:07.623 "num_base_bdevs": 4, 00:14:07.623 "num_base_bdevs_discovered": 3, 00:14:07.623 "num_base_bdevs_operational": 3, 00:14:07.623 
"process": { 00:14:07.623 "type": "rebuild", 00:14:07.623 "target": "spare", 00:14:07.623 "progress": { 00:14:07.623 "blocks": 24576, 00:14:07.623 "percent": 38 00:14:07.623 } 00:14:07.623 }, 00:14:07.623 "base_bdevs_list": [ 00:14:07.623 { 00:14:07.623 "name": "spare", 00:14:07.623 "uuid": "a8df097e-77bb-56ac-8c05-889c6a7b6d96", 00:14:07.623 "is_configured": true, 00:14:07.623 "data_offset": 2048, 00:14:07.623 "data_size": 63488 00:14:07.623 }, 00:14:07.623 { 00:14:07.623 "name": null, 00:14:07.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.623 "is_configured": false, 00:14:07.623 "data_offset": 0, 00:14:07.623 "data_size": 63488 00:14:07.623 }, 00:14:07.623 { 00:14:07.623 "name": "BaseBdev3", 00:14:07.623 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:07.623 "is_configured": true, 00:14:07.623 "data_offset": 2048, 00:14:07.623 "data_size": 63488 00:14:07.623 }, 00:14:07.623 { 00:14:07.623 "name": "BaseBdev4", 00:14:07.623 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:07.623 "is_configured": true, 00:14:07.623 "data_offset": 2048, 00:14:07.623 "data_size": 63488 00:14:07.623 } 00:14:07.623 ] 00:14:07.623 }' 00:14:07.623 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.623 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.623 05:52:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.623 05:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.623 05:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=449 00:14:07.623 05:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:07.623 05:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:07.623 05:52:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:07.623 05:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:07.623 05:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:07.623 05:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:07.623 05:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.623 05:52:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.623 05:52:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:07.623 05:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:07.623 05:52:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.623 05:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:07.623 "name": "raid_bdev1", 00:14:07.623 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:07.623 "strip_size_kb": 0, 00:14:07.623 "state": "online", 00:14:07.623 "raid_level": "raid1", 00:14:07.623 "superblock": true, 00:14:07.623 "num_base_bdevs": 4, 00:14:07.623 "num_base_bdevs_discovered": 3, 00:14:07.623 "num_base_bdevs_operational": 3, 00:14:07.623 "process": { 00:14:07.623 "type": "rebuild", 00:14:07.623 "target": "spare", 00:14:07.623 "progress": { 00:14:07.623 "blocks": 26624, 00:14:07.623 "percent": 41 00:14:07.623 } 00:14:07.623 }, 00:14:07.623 "base_bdevs_list": [ 00:14:07.623 { 00:14:07.623 "name": "spare", 00:14:07.623 "uuid": "a8df097e-77bb-56ac-8c05-889c6a7b6d96", 00:14:07.623 "is_configured": true, 00:14:07.623 "data_offset": 2048, 00:14:07.623 "data_size": 63488 00:14:07.623 }, 00:14:07.623 { 00:14:07.623 "name": null, 00:14:07.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.623 
"is_configured": false, 00:14:07.623 "data_offset": 0, 00:14:07.623 "data_size": 63488 00:14:07.623 }, 00:14:07.623 { 00:14:07.623 "name": "BaseBdev3", 00:14:07.623 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:07.623 "is_configured": true, 00:14:07.623 "data_offset": 2048, 00:14:07.623 "data_size": 63488 00:14:07.623 }, 00:14:07.623 { 00:14:07.623 "name": "BaseBdev4", 00:14:07.623 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:07.623 "is_configured": true, 00:14:07.623 "data_offset": 2048, 00:14:07.623 "data_size": 63488 00:14:07.623 } 00:14:07.623 ] 00:14:07.623 }' 00:14:07.623 05:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:07.623 05:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:07.623 05:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:07.882 05:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:07.882 05:52:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:08.821 05:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:08.821 05:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:08.821 05:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:08.821 05:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:08.821 05:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:08.821 05:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:08.821 05:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.821 05:52:16 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.821 05:52:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:08.821 05:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.821 05:52:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.821 05:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:08.821 "name": "raid_bdev1", 00:14:08.821 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:08.821 "strip_size_kb": 0, 00:14:08.821 "state": "online", 00:14:08.821 "raid_level": "raid1", 00:14:08.821 "superblock": true, 00:14:08.821 "num_base_bdevs": 4, 00:14:08.821 "num_base_bdevs_discovered": 3, 00:14:08.821 "num_base_bdevs_operational": 3, 00:14:08.821 "process": { 00:14:08.821 "type": "rebuild", 00:14:08.821 "target": "spare", 00:14:08.821 "progress": { 00:14:08.821 "blocks": 49152, 00:14:08.821 "percent": 77 00:14:08.821 } 00:14:08.821 }, 00:14:08.821 "base_bdevs_list": [ 00:14:08.821 { 00:14:08.821 "name": "spare", 00:14:08.821 "uuid": "a8df097e-77bb-56ac-8c05-889c6a7b6d96", 00:14:08.821 "is_configured": true, 00:14:08.821 "data_offset": 2048, 00:14:08.821 "data_size": 63488 00:14:08.821 }, 00:14:08.821 { 00:14:08.821 "name": null, 00:14:08.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:08.821 "is_configured": false, 00:14:08.821 "data_offset": 0, 00:14:08.821 "data_size": 63488 00:14:08.821 }, 00:14:08.821 { 00:14:08.821 "name": "BaseBdev3", 00:14:08.821 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:08.821 "is_configured": true, 00:14:08.821 "data_offset": 2048, 00:14:08.821 "data_size": 63488 00:14:08.821 }, 00:14:08.821 { 00:14:08.821 "name": "BaseBdev4", 00:14:08.821 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:08.821 "is_configured": true, 00:14:08.821 "data_offset": 2048, 00:14:08.821 "data_size": 63488 00:14:08.821 } 00:14:08.821 ] 00:14:08.821 
}' 00:14:08.821 05:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:08.821 05:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:08.821 05:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:08.821 05:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:08.821 05:52:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:09.390 [2024-12-12 05:52:16.784434] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:09.390 [2024-12-12 05:52:16.784564] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:09.390 [2024-12-12 05:52:16.784717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.960 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:09.960 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:09.960 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.960 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:09.960 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:09.960 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.960 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.960 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.960 05:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.960 05:52:17 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.960 05:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.960 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.960 "name": "raid_bdev1", 00:14:09.960 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:09.960 "strip_size_kb": 0, 00:14:09.960 "state": "online", 00:14:09.960 "raid_level": "raid1", 00:14:09.960 "superblock": true, 00:14:09.960 "num_base_bdevs": 4, 00:14:09.960 "num_base_bdevs_discovered": 3, 00:14:09.960 "num_base_bdevs_operational": 3, 00:14:09.960 "base_bdevs_list": [ 00:14:09.960 { 00:14:09.960 "name": "spare", 00:14:09.960 "uuid": "a8df097e-77bb-56ac-8c05-889c6a7b6d96", 00:14:09.960 "is_configured": true, 00:14:09.960 "data_offset": 2048, 00:14:09.960 "data_size": 63488 00:14:09.960 }, 00:14:09.960 { 00:14:09.960 "name": null, 00:14:09.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.960 "is_configured": false, 00:14:09.960 "data_offset": 0, 00:14:09.960 "data_size": 63488 00:14:09.960 }, 00:14:09.960 { 00:14:09.960 "name": "BaseBdev3", 00:14:09.960 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:09.960 "is_configured": true, 00:14:09.960 "data_offset": 2048, 00:14:09.960 "data_size": 63488 00:14:09.960 }, 00:14:09.960 { 00:14:09.960 "name": "BaseBdev4", 00:14:09.960 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:09.960 "is_configured": true, 00:14:09.960 "data_offset": 2048, 00:14:09.960 "data_size": 63488 00:14:09.960 } 00:14:09.960 ] 00:14:09.960 }' 00:14:09.960 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.960 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:09.960 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.961 05:52:17 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:09.961 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:09.961 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:09.961 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.961 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:09.961 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:09.961 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.961 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.961 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.961 05:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.961 05:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:09.961 05:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.221 "name": "raid_bdev1", 00:14:10.221 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:10.221 "strip_size_kb": 0, 00:14:10.221 "state": "online", 00:14:10.221 "raid_level": "raid1", 00:14:10.221 "superblock": true, 00:14:10.221 "num_base_bdevs": 4, 00:14:10.221 "num_base_bdevs_discovered": 3, 00:14:10.221 "num_base_bdevs_operational": 3, 00:14:10.221 "base_bdevs_list": [ 00:14:10.221 { 00:14:10.221 "name": "spare", 00:14:10.221 "uuid": "a8df097e-77bb-56ac-8c05-889c6a7b6d96", 00:14:10.221 "is_configured": true, 00:14:10.221 "data_offset": 2048, 00:14:10.221 "data_size": 63488 00:14:10.221 }, 00:14:10.221 { 00:14:10.221 "name": 
null, 00:14:10.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.221 "is_configured": false, 00:14:10.221 "data_offset": 0, 00:14:10.221 "data_size": 63488 00:14:10.221 }, 00:14:10.221 { 00:14:10.221 "name": "BaseBdev3", 00:14:10.221 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:10.221 "is_configured": true, 00:14:10.221 "data_offset": 2048, 00:14:10.221 "data_size": 63488 00:14:10.221 }, 00:14:10.221 { 00:14:10.221 "name": "BaseBdev4", 00:14:10.221 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:10.221 "is_configured": true, 00:14:10.221 "data_offset": 2048, 00:14:10.221 "data_size": 63488 00:14:10.221 } 00:14:10.221 ] 00:14:10.221 }' 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:10.221 "name": "raid_bdev1", 00:14:10.221 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:10.221 "strip_size_kb": 0, 00:14:10.221 "state": "online", 00:14:10.221 "raid_level": "raid1", 00:14:10.221 "superblock": true, 00:14:10.221 "num_base_bdevs": 4, 00:14:10.221 "num_base_bdevs_discovered": 3, 00:14:10.221 "num_base_bdevs_operational": 3, 00:14:10.221 "base_bdevs_list": [ 00:14:10.221 { 00:14:10.221 "name": "spare", 00:14:10.221 "uuid": "a8df097e-77bb-56ac-8c05-889c6a7b6d96", 00:14:10.221 "is_configured": true, 00:14:10.221 "data_offset": 2048, 00:14:10.221 "data_size": 63488 00:14:10.221 }, 00:14:10.221 { 00:14:10.221 "name": null, 00:14:10.221 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.221 "is_configured": false, 00:14:10.221 "data_offset": 0, 00:14:10.221 "data_size": 63488 00:14:10.221 }, 00:14:10.221 { 00:14:10.221 "name": "BaseBdev3", 00:14:10.221 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:10.221 "is_configured": true, 00:14:10.221 "data_offset": 2048, 00:14:10.221 "data_size": 63488 00:14:10.221 }, 00:14:10.221 { 00:14:10.221 "name": "BaseBdev4", 00:14:10.221 
"uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:10.221 "is_configured": true, 00:14:10.221 "data_offset": 2048, 00:14:10.221 "data_size": 63488 00:14:10.221 } 00:14:10.221 ] 00:14:10.221 }' 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:10.221 05:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.481 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:10.481 05:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.481 05:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.481 [2024-12-12 05:52:17.990671] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:10.481 [2024-12-12 05:52:17.990745] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:10.481 [2024-12-12 05:52:17.990845] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:10.481 [2024-12-12 05:52:17.990970] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:10.481 [2024-12-12 05:52:17.991017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:10.481 05:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.481 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:10.481 05:52:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.481 05:52:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.481 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:10.741 /dev/nbd0 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:10.741 1+0 records in 00:14:10.741 1+0 records out 00:14:10.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292927 s, 14.0 MB/s 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:10.741 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:11.000 /dev/nbd1 00:14:11.001 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 
00:14:11.001 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:11.001 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:11.001 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:11.001 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:11.001 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:11.001 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:11.001 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:11.001 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:11.001 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:11.001 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:11.001 1+0 records in 00:14:11.001 1+0 records out 00:14:11.001 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421389 s, 9.7 MB/s 00:14:11.001 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.001 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:11.001 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:11.001 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:11.001 05:52:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:11.001 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:11.001 05:52:18 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:11.001 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:11.260 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:11.260 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:11.260 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:11.260 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:11.261 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:11.261 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.261 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:11.520 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:11.520 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:11.520 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:11.520 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.520 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.520 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:11.520 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:11.520 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.520 05:52:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:11.521 05:52:18 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:11.521 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:11.521 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:11.521 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:11.521 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:11.521 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:11.521 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:11.521 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:11.521 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:11.521 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:11.521 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:11.521 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.521 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.781 [2024-12-12 05:52:19.059603] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:11.781 [2024-12-12 05:52:19.059663] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:14:11.781 [2024-12-12 05:52:19.059688] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:11.781 [2024-12-12 05:52:19.059697] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.781 [2024-12-12 05:52:19.061808] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.781 [2024-12-12 05:52:19.061846] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:11.781 [2024-12-12 05:52:19.061934] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:11.781 [2024-12-12 05:52:19.061981] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:11.781 [2024-12-12 05:52:19.062118] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:11.781 [2024-12-12 05:52:19.062211] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:11.781 spare 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.781 [2024-12-12 05:52:19.162108] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:11.781 [2024-12-12 05:52:19.162168] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:11.781 [2024-12-12 05:52:19.162462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:11.781 [2024-12-12 05:52:19.162654] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:11.781 [2024-12-12 
05:52:19.162669] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:11.781 [2024-12-12 05:52:19.162817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.781 "name": "raid_bdev1", 00:14:11.781 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:11.781 "strip_size_kb": 0, 00:14:11.781 "state": "online", 00:14:11.781 "raid_level": "raid1", 00:14:11.781 "superblock": true, 00:14:11.781 "num_base_bdevs": 4, 00:14:11.781 "num_base_bdevs_discovered": 3, 00:14:11.781 "num_base_bdevs_operational": 3, 00:14:11.781 "base_bdevs_list": [ 00:14:11.781 { 00:14:11.781 "name": "spare", 00:14:11.781 "uuid": "a8df097e-77bb-56ac-8c05-889c6a7b6d96", 00:14:11.781 "is_configured": true, 00:14:11.781 "data_offset": 2048, 00:14:11.781 "data_size": 63488 00:14:11.781 }, 00:14:11.781 { 00:14:11.781 "name": null, 00:14:11.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.781 "is_configured": false, 00:14:11.781 "data_offset": 2048, 00:14:11.781 "data_size": 63488 00:14:11.781 }, 00:14:11.781 { 00:14:11.781 "name": "BaseBdev3", 00:14:11.781 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:11.781 "is_configured": true, 00:14:11.781 "data_offset": 2048, 00:14:11.781 "data_size": 63488 00:14:11.781 }, 00:14:11.781 { 00:14:11.781 "name": "BaseBdev4", 00:14:11.781 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:11.781 "is_configured": true, 00:14:11.781 "data_offset": 2048, 00:14:11.781 "data_size": 63488 00:14:11.781 } 00:14:11.781 ] 00:14:11.781 }' 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.781 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.041 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:12.041 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.041 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:12.300 05:52:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:12.300 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.300 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.301 "name": "raid_bdev1", 00:14:12.301 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:12.301 "strip_size_kb": 0, 00:14:12.301 "state": "online", 00:14:12.301 "raid_level": "raid1", 00:14:12.301 "superblock": true, 00:14:12.301 "num_base_bdevs": 4, 00:14:12.301 "num_base_bdevs_discovered": 3, 00:14:12.301 "num_base_bdevs_operational": 3, 00:14:12.301 "base_bdevs_list": [ 00:14:12.301 { 00:14:12.301 "name": "spare", 00:14:12.301 "uuid": "a8df097e-77bb-56ac-8c05-889c6a7b6d96", 00:14:12.301 "is_configured": true, 00:14:12.301 "data_offset": 2048, 00:14:12.301 "data_size": 63488 00:14:12.301 }, 00:14:12.301 { 00:14:12.301 "name": null, 00:14:12.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.301 "is_configured": false, 00:14:12.301 "data_offset": 2048, 00:14:12.301 "data_size": 63488 00:14:12.301 }, 00:14:12.301 { 00:14:12.301 "name": "BaseBdev3", 00:14:12.301 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:12.301 "is_configured": true, 00:14:12.301 "data_offset": 2048, 00:14:12.301 "data_size": 63488 00:14:12.301 }, 00:14:12.301 { 00:14:12.301 "name": "BaseBdev4", 00:14:12.301 "uuid": 
"6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:12.301 "is_configured": true, 00:14:12.301 "data_offset": 2048, 00:14:12.301 "data_size": 63488 00:14:12.301 } 00:14:12.301 ] 00:14:12.301 }' 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.301 [2024-12-12 05:52:19.750488] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:12.301 05:52:19 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.301 "name": "raid_bdev1", 00:14:12.301 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:12.301 "strip_size_kb": 0, 00:14:12.301 "state": "online", 00:14:12.301 "raid_level": "raid1", 00:14:12.301 "superblock": true, 00:14:12.301 "num_base_bdevs": 4, 00:14:12.301 "num_base_bdevs_discovered": 2, 00:14:12.301 "num_base_bdevs_operational": 2, 00:14:12.301 "base_bdevs_list": [ 00:14:12.301 { 
00:14:12.301 "name": null, 00:14:12.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.301 "is_configured": false, 00:14:12.301 "data_offset": 0, 00:14:12.301 "data_size": 63488 00:14:12.301 }, 00:14:12.301 { 00:14:12.301 "name": null, 00:14:12.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.301 "is_configured": false, 00:14:12.301 "data_offset": 2048, 00:14:12.301 "data_size": 63488 00:14:12.301 }, 00:14:12.301 { 00:14:12.301 "name": "BaseBdev3", 00:14:12.301 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:12.301 "is_configured": true, 00:14:12.301 "data_offset": 2048, 00:14:12.301 "data_size": 63488 00:14:12.301 }, 00:14:12.301 { 00:14:12.301 "name": "BaseBdev4", 00:14:12.301 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:12.301 "is_configured": true, 00:14:12.301 "data_offset": 2048, 00:14:12.301 "data_size": 63488 00:14:12.301 } 00:14:12.301 ] 00:14:12.301 }' 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.301 05:52:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.870 05:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:12.870 05:52:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.870 05:52:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:12.870 [2024-12-12 05:52:20.145832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:12.870 [2024-12-12 05:52:20.146065] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:12.870 [2024-12-12 05:52:20.146088] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:12.870 [2024-12-12 05:52:20.146125] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:12.870 [2024-12-12 05:52:20.160193] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:12.870 05:52:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.870 05:52:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:12.870 [2024-12-12 05:52:20.161998] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:13.809 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:13.809 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.809 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:13.809 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:13.809 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.809 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.809 05:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.809 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.809 05:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.809 05:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.809 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.809 "name": "raid_bdev1", 00:14:13.809 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:13.809 "strip_size_kb": 0, 00:14:13.809 "state": "online", 00:14:13.809 "raid_level": "raid1", 
00:14:13.809 "superblock": true, 00:14:13.809 "num_base_bdevs": 4, 00:14:13.809 "num_base_bdevs_discovered": 3, 00:14:13.809 "num_base_bdevs_operational": 3, 00:14:13.809 "process": { 00:14:13.809 "type": "rebuild", 00:14:13.809 "target": "spare", 00:14:13.809 "progress": { 00:14:13.809 "blocks": 20480, 00:14:13.809 "percent": 32 00:14:13.809 } 00:14:13.809 }, 00:14:13.809 "base_bdevs_list": [ 00:14:13.809 { 00:14:13.809 "name": "spare", 00:14:13.809 "uuid": "a8df097e-77bb-56ac-8c05-889c6a7b6d96", 00:14:13.809 "is_configured": true, 00:14:13.809 "data_offset": 2048, 00:14:13.809 "data_size": 63488 00:14:13.809 }, 00:14:13.809 { 00:14:13.809 "name": null, 00:14:13.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.809 "is_configured": false, 00:14:13.809 "data_offset": 2048, 00:14:13.809 "data_size": 63488 00:14:13.809 }, 00:14:13.809 { 00:14:13.809 "name": "BaseBdev3", 00:14:13.810 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:13.810 "is_configured": true, 00:14:13.810 "data_offset": 2048, 00:14:13.810 "data_size": 63488 00:14:13.810 }, 00:14:13.810 { 00:14:13.810 "name": "BaseBdev4", 00:14:13.810 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:13.810 "is_configured": true, 00:14:13.810 "data_offset": 2048, 00:14:13.810 "data_size": 63488 00:14:13.810 } 00:14:13.810 ] 00:14:13.810 }' 00:14:13.810 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.810 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:13.810 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.810 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:13.810 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:13.810 05:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:13.810 05:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:13.810 [2024-12-12 05:52:21.301737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.076 [2024-12-12 05:52:21.366806] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:14.076 [2024-12-12 05:52:21.366858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.076 [2024-12-12 05:52:21.366876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.076 [2024-12-12 05:52:21.366883] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:14.076 05:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.077 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:14.077 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.077 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.077 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.077 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.077 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:14.077 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.077 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.077 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.077 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.077 05:52:21 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.077 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.077 05:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.077 05:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.077 05:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.077 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.077 "name": "raid_bdev1", 00:14:14.077 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:14.077 "strip_size_kb": 0, 00:14:14.077 "state": "online", 00:14:14.077 "raid_level": "raid1", 00:14:14.077 "superblock": true, 00:14:14.077 "num_base_bdevs": 4, 00:14:14.077 "num_base_bdevs_discovered": 2, 00:14:14.077 "num_base_bdevs_operational": 2, 00:14:14.077 "base_bdevs_list": [ 00:14:14.077 { 00:14:14.077 "name": null, 00:14:14.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.077 "is_configured": false, 00:14:14.077 "data_offset": 0, 00:14:14.077 "data_size": 63488 00:14:14.077 }, 00:14:14.077 { 00:14:14.077 "name": null, 00:14:14.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.077 "is_configured": false, 00:14:14.077 "data_offset": 2048, 00:14:14.077 "data_size": 63488 00:14:14.077 }, 00:14:14.077 { 00:14:14.077 "name": "BaseBdev3", 00:14:14.077 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:14.077 "is_configured": true, 00:14:14.077 "data_offset": 2048, 00:14:14.077 "data_size": 63488 00:14:14.077 }, 00:14:14.077 { 00:14:14.077 "name": "BaseBdev4", 00:14:14.077 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:14.077 "is_configured": true, 00:14:14.077 "data_offset": 2048, 00:14:14.077 "data_size": 63488 00:14:14.077 } 00:14:14.077 ] 00:14:14.077 }' 00:14:14.077 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:14.077 05:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.348 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:14.348 05:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.348 05:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:14.348 [2024-12-12 05:52:21.798595] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:14.348 [2024-12-12 05:52:21.798705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:14.348 [2024-12-12 05:52:21.798753] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:14.348 [2024-12-12 05:52:21.798782] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:14.348 [2024-12-12 05:52:21.799309] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:14.348 [2024-12-12 05:52:21.799375] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:14.348 [2024-12-12 05:52:21.799527] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:14.348 [2024-12-12 05:52:21.799571] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:14.348 [2024-12-12 05:52:21.799638] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:14.348 [2024-12-12 05:52:21.799703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:14.348 [2024-12-12 05:52:21.812841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:14.348 spare 00:14:14.348 05:52:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.348 [2024-12-12 05:52:21.814718] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:14.348 05:52:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:15.731 05:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.731 05:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.731 05:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:15.731 05:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:15.731 05:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.731 05:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.731 05:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.731 05:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.731 05:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.731 05:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.731 05:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.731 "name": "raid_bdev1", 00:14:15.731 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:15.731 "strip_size_kb": 0, 00:14:15.731 "state": "online", 00:14:15.731 
"raid_level": "raid1", 00:14:15.731 "superblock": true, 00:14:15.731 "num_base_bdevs": 4, 00:14:15.731 "num_base_bdevs_discovered": 3, 00:14:15.731 "num_base_bdevs_operational": 3, 00:14:15.731 "process": { 00:14:15.731 "type": "rebuild", 00:14:15.731 "target": "spare", 00:14:15.731 "progress": { 00:14:15.731 "blocks": 20480, 00:14:15.731 "percent": 32 00:14:15.731 } 00:14:15.731 }, 00:14:15.731 "base_bdevs_list": [ 00:14:15.731 { 00:14:15.731 "name": "spare", 00:14:15.731 "uuid": "a8df097e-77bb-56ac-8c05-889c6a7b6d96", 00:14:15.731 "is_configured": true, 00:14:15.731 "data_offset": 2048, 00:14:15.731 "data_size": 63488 00:14:15.731 }, 00:14:15.731 { 00:14:15.731 "name": null, 00:14:15.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.731 "is_configured": false, 00:14:15.731 "data_offset": 2048, 00:14:15.731 "data_size": 63488 00:14:15.731 }, 00:14:15.731 { 00:14:15.731 "name": "BaseBdev3", 00:14:15.731 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:15.731 "is_configured": true, 00:14:15.731 "data_offset": 2048, 00:14:15.731 "data_size": 63488 00:14:15.731 }, 00:14:15.731 { 00:14:15.731 "name": "BaseBdev4", 00:14:15.731 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:15.731 "is_configured": true, 00:14:15.731 "data_offset": 2048, 00:14:15.731 "data_size": 63488 00:14:15.731 } 00:14:15.731 ] 00:14:15.731 }' 00:14:15.731 05:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.731 05:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:15.731 05:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.731 05:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:15.731 05:52:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:15.731 05:52:22 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.731 05:52:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.731 [2024-12-12 05:52:22.974526] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:15.731 [2024-12-12 05:52:23.019293] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:15.731 [2024-12-12 05:52:23.019350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.731 [2024-12-12 05:52:23.019365] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:15.731 [2024-12-12 05:52:23.019374] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:15.731 05:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.731 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:15.731 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.731 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.731 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.731 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.731 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:15.731 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.731 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.731 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.731 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.731 
05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.731 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.731 05:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.731 05:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.731 05:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.731 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:15.731 "name": "raid_bdev1", 00:14:15.731 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:15.731 "strip_size_kb": 0, 00:14:15.731 "state": "online", 00:14:15.731 "raid_level": "raid1", 00:14:15.731 "superblock": true, 00:14:15.731 "num_base_bdevs": 4, 00:14:15.731 "num_base_bdevs_discovered": 2, 00:14:15.731 "num_base_bdevs_operational": 2, 00:14:15.731 "base_bdevs_list": [ 00:14:15.731 { 00:14:15.731 "name": null, 00:14:15.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.732 "is_configured": false, 00:14:15.732 "data_offset": 0, 00:14:15.732 "data_size": 63488 00:14:15.732 }, 00:14:15.732 { 00:14:15.732 "name": null, 00:14:15.732 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.732 "is_configured": false, 00:14:15.732 "data_offset": 2048, 00:14:15.732 "data_size": 63488 00:14:15.732 }, 00:14:15.732 { 00:14:15.732 "name": "BaseBdev3", 00:14:15.732 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:15.732 "is_configured": true, 00:14:15.732 "data_offset": 2048, 00:14:15.732 "data_size": 63488 00:14:15.732 }, 00:14:15.732 { 00:14:15.732 "name": "BaseBdev4", 00:14:15.732 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:15.732 "is_configured": true, 00:14:15.732 "data_offset": 2048, 00:14:15.732 "data_size": 63488 00:14:15.732 } 00:14:15.732 ] 00:14:15.732 }' 00:14:15.732 05:52:23 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:15.732 05:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.991 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:15.991 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.991 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:15.991 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:15.991 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.991 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.991 05:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.991 05:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:15.991 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.991 05:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.251 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.251 "name": "raid_bdev1", 00:14:16.251 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:16.251 "strip_size_kb": 0, 00:14:16.251 "state": "online", 00:14:16.251 "raid_level": "raid1", 00:14:16.251 "superblock": true, 00:14:16.251 "num_base_bdevs": 4, 00:14:16.251 "num_base_bdevs_discovered": 2, 00:14:16.251 "num_base_bdevs_operational": 2, 00:14:16.251 "base_bdevs_list": [ 00:14:16.251 { 00:14:16.251 "name": null, 00:14:16.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.251 "is_configured": false, 00:14:16.251 "data_offset": 0, 00:14:16.251 "data_size": 63488 00:14:16.251 }, 00:14:16.251 
{ 00:14:16.251 "name": null, 00:14:16.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.251 "is_configured": false, 00:14:16.251 "data_offset": 2048, 00:14:16.251 "data_size": 63488 00:14:16.251 }, 00:14:16.251 { 00:14:16.251 "name": "BaseBdev3", 00:14:16.251 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:16.251 "is_configured": true, 00:14:16.251 "data_offset": 2048, 00:14:16.251 "data_size": 63488 00:14:16.251 }, 00:14:16.251 { 00:14:16.251 "name": "BaseBdev4", 00:14:16.251 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:16.251 "is_configured": true, 00:14:16.251 "data_offset": 2048, 00:14:16.251 "data_size": 63488 00:14:16.251 } 00:14:16.251 ] 00:14:16.251 }' 00:14:16.251 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.251 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:16.251 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.251 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:16.251 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:16.251 05:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.251 05:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.251 05:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.251 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:16.251 05:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.251 05:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:16.251 [2024-12-12 05:52:23.614895] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:16.251 [2024-12-12 05:52:23.614952] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:16.251 [2024-12-12 05:52:23.614972] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:16.251 [2024-12-12 05:52:23.614983] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:16.251 [2024-12-12 05:52:23.615411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:16.251 [2024-12-12 05:52:23.615430] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:16.251 [2024-12-12 05:52:23.615523] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:16.251 [2024-12-12 05:52:23.615540] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:16.251 [2024-12-12 05:52:23.615548] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:16.251 [2024-12-12 05:52:23.615573] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:16.251 BaseBdev1 00:14:16.251 05:52:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.251 05:52:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:17.189 05:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:17.189 05:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:17.189 05:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:17.189 05:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:17.189 05:52:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:17.189 05:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:17.189 05:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:17.189 05:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:17.189 05:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:17.189 05:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:17.189 05:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.189 05:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.189 05:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.189 05:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.189 05:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.189 05:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:17.189 "name": "raid_bdev1", 00:14:17.189 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:17.189 "strip_size_kb": 0, 00:14:17.189 "state": "online", 00:14:17.189 "raid_level": "raid1", 00:14:17.189 "superblock": true, 00:14:17.189 "num_base_bdevs": 4, 00:14:17.189 "num_base_bdevs_discovered": 2, 00:14:17.189 "num_base_bdevs_operational": 2, 00:14:17.189 "base_bdevs_list": [ 00:14:17.189 { 00:14:17.189 "name": null, 00:14:17.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.189 "is_configured": false, 00:14:17.189 "data_offset": 0, 00:14:17.189 "data_size": 63488 00:14:17.189 }, 00:14:17.189 { 00:14:17.189 "name": null, 00:14:17.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.189 
"is_configured": false, 00:14:17.189 "data_offset": 2048, 00:14:17.189 "data_size": 63488 00:14:17.189 }, 00:14:17.189 { 00:14:17.189 "name": "BaseBdev3", 00:14:17.189 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:17.189 "is_configured": true, 00:14:17.189 "data_offset": 2048, 00:14:17.189 "data_size": 63488 00:14:17.189 }, 00:14:17.189 { 00:14:17.189 "name": "BaseBdev4", 00:14:17.189 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:17.189 "is_configured": true, 00:14:17.189 "data_offset": 2048, 00:14:17.189 "data_size": 63488 00:14:17.189 } 00:14:17.189 ] 00:14:17.189 }' 00:14:17.189 05:52:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:17.189 05:52:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:17.758 "name": "raid_bdev1", 00:14:17.758 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:17.758 "strip_size_kb": 0, 00:14:17.758 "state": "online", 00:14:17.758 "raid_level": "raid1", 00:14:17.758 "superblock": true, 00:14:17.758 "num_base_bdevs": 4, 00:14:17.758 "num_base_bdevs_discovered": 2, 00:14:17.758 "num_base_bdevs_operational": 2, 00:14:17.758 "base_bdevs_list": [ 00:14:17.758 { 00:14:17.758 "name": null, 00:14:17.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.758 "is_configured": false, 00:14:17.758 "data_offset": 0, 00:14:17.758 "data_size": 63488 00:14:17.758 }, 00:14:17.758 { 00:14:17.758 "name": null, 00:14:17.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.758 "is_configured": false, 00:14:17.758 "data_offset": 2048, 00:14:17.758 "data_size": 63488 00:14:17.758 }, 00:14:17.758 { 00:14:17.758 "name": "BaseBdev3", 00:14:17.758 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:17.758 "is_configured": true, 00:14:17.758 "data_offset": 2048, 00:14:17.758 "data_size": 63488 00:14:17.758 }, 00:14:17.758 { 00:14:17.758 "name": "BaseBdev4", 00:14:17.758 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:17.758 "is_configured": true, 00:14:17.758 "data_offset": 2048, 00:14:17.758 "data_size": 63488 00:14:17.758 } 00:14:17.758 ] 00:14:17.758 }' 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:17.758 [2024-12-12 05:52:25.188255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:17.758 [2024-12-12 05:52:25.188450] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:17.758 [2024-12-12 05:52:25.188465] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:17.758 request: 00:14:17.758 { 00:14:17.758 "base_bdev": "BaseBdev1", 00:14:17.758 "raid_bdev": "raid_bdev1", 00:14:17.758 "method": "bdev_raid_add_base_bdev", 00:14:17.758 "req_id": 1 00:14:17.758 } 00:14:17.758 Got JSON-RPC error response 00:14:17.758 response: 00:14:17.758 { 00:14:17.758 "code": -22, 00:14:17.758 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:17.758 } 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:17.758 05:52:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:18.697 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:18.697 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:18.697 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:18.697 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:18.697 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:18.697 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:18.697 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:18.697 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:18.697 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:18.697 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:18.697 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:18.697 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:18.697 05:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.697 05:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
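The trace above shows `common/autotest_common.sh`'s `NOT` machinery at work: the `bdev_raid_add_base_bdev` RPC is expected to fail (the superblock no longer contains BaseBdev1's uuid), so the wrapper captures the nonzero exit status into `es` and inverts it, letting the test pass only when the command fails. A minimal stand-alone sketch of that inversion pattern (this is a simplified illustration, not the actual harness function, which also handles `valid_exec_arg` dispatch and special exit codes):

```shell
#!/usr/bin/env bash
# Simplified sketch of the expected-failure wrapper pattern:
# run a command, capture its exit status, and succeed only if it failed.
NOT() {
  local es=0
  "$@" || es=$?
  # Invert the result: a nonzero exit status from the wrapped
  # command becomes success for the caller, and vice versa.
  (( es != 0 ))
}

NOT false && echo "expected-failure path OK"
NOT true || echo "unexpected-success path caught"
```

In the log, `rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1` returns the JSON-RPC error (code -22), `es` becomes 1, and the surrounding `(( !es == 0 ))` check converts that into a passing step.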
00:14:18.956 05:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.956 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:18.956 "name": "raid_bdev1", 00:14:18.956 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:18.956 "strip_size_kb": 0, 00:14:18.956 "state": "online", 00:14:18.956 "raid_level": "raid1", 00:14:18.956 "superblock": true, 00:14:18.956 "num_base_bdevs": 4, 00:14:18.956 "num_base_bdevs_discovered": 2, 00:14:18.956 "num_base_bdevs_operational": 2, 00:14:18.956 "base_bdevs_list": [ 00:14:18.956 { 00:14:18.956 "name": null, 00:14:18.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.956 "is_configured": false, 00:14:18.956 "data_offset": 0, 00:14:18.956 "data_size": 63488 00:14:18.956 }, 00:14:18.956 { 00:14:18.956 "name": null, 00:14:18.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.956 "is_configured": false, 00:14:18.956 "data_offset": 2048, 00:14:18.956 "data_size": 63488 00:14:18.956 }, 00:14:18.956 { 00:14:18.956 "name": "BaseBdev3", 00:14:18.956 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:18.956 "is_configured": true, 00:14:18.956 "data_offset": 2048, 00:14:18.956 "data_size": 63488 00:14:18.956 }, 00:14:18.956 { 00:14:18.956 "name": "BaseBdev4", 00:14:18.956 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:18.956 "is_configured": true, 00:14:18.956 "data_offset": 2048, 00:14:18.956 "data_size": 63488 00:14:18.956 } 00:14:18.956 ] 00:14:18.956 }' 00:14:18.956 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:18.956 05:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.216 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:19.216 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.216 05:52:26 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:19.216 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:19.216 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.216 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.216 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.216 05:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.216 05:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:19.216 05:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.216 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.216 "name": "raid_bdev1", 00:14:19.216 "uuid": "bf5a566e-4aa2-4ed6-ac03-64eddcea3f82", 00:14:19.216 "strip_size_kb": 0, 00:14:19.216 "state": "online", 00:14:19.216 "raid_level": "raid1", 00:14:19.216 "superblock": true, 00:14:19.216 "num_base_bdevs": 4, 00:14:19.216 "num_base_bdevs_discovered": 2, 00:14:19.216 "num_base_bdevs_operational": 2, 00:14:19.216 "base_bdevs_list": [ 00:14:19.216 { 00:14:19.216 "name": null, 00:14:19.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.216 "is_configured": false, 00:14:19.216 "data_offset": 0, 00:14:19.216 "data_size": 63488 00:14:19.216 }, 00:14:19.216 { 00:14:19.216 "name": null, 00:14:19.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:19.216 "is_configured": false, 00:14:19.216 "data_offset": 2048, 00:14:19.216 "data_size": 63488 00:14:19.216 }, 00:14:19.216 { 00:14:19.216 "name": "BaseBdev3", 00:14:19.216 "uuid": "4ff48742-93ca-5eb5-92b7-fe9357b810d8", 00:14:19.216 "is_configured": true, 00:14:19.216 "data_offset": 2048, 00:14:19.216 "data_size": 63488 00:14:19.216 }, 
00:14:19.216 { 00:14:19.216 "name": "BaseBdev4", 00:14:19.216 "uuid": "6dcdf704-0335-5e1d-a925-dddd9a84d2a3", 00:14:19.216 "is_configured": true, 00:14:19.216 "data_offset": 2048, 00:14:19.216 "data_size": 63488 00:14:19.216 } 00:14:19.216 ] 00:14:19.216 }' 00:14:19.216 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.477 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:19.477 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.477 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:19.477 05:52:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78756 00:14:19.477 05:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78756 ']' 00:14:19.477 05:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78756 00:14:19.477 05:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:19.477 05:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:19.477 05:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78756 00:14:19.477 killing process with pid 78756 00:14:19.477 Received shutdown signal, test time was about 60.000000 seconds 00:14:19.477 00:14:19.477 Latency(us) 00:14:19.477 [2024-12-12T05:52:26.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.477 [2024-12-12T05:52:26.999Z] =================================================================================================================== 00:14:19.477 [2024-12-12T05:52:26.999Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:19.477 05:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:14:19.477 05:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:19.477 05:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78756' 00:14:19.477 05:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78756 00:14:19.477 [2024-12-12 05:52:26.801611] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:19.477 [2024-12-12 05:52:26.801717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:19.477 [2024-12-12 05:52:26.801784] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:19.477 05:52:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78756 00:14:19.477 [2024-12-12 05:52:26.801794] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:20.047 [2024-12-12 05:52:27.260284] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:20.987 ************************************ 00:14:20.987 END TEST raid_rebuild_test_sb 00:14:20.987 ************************************ 00:14:20.987 00:14:20.987 real 0m23.895s 00:14:20.987 user 0m29.067s 00:14:20.987 sys 0m3.449s 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:20.987 05:52:28 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:14:20.987 05:52:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:20.987 05:52:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:20.987 05:52:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
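Throughout the test above, `verify_raid_bdev_state` pulls the raid bdev's JSON from `rpc_cmd bdev_raid_get_bdevs all`, selects the entry with `jq -r '.[] | select(.name == "raid_bdev1")'`, and compares fields such as `num_base_bdevs_discovered` against the expected count. A self-contained sketch of that counting check follows; the JSON literal is a trimmed stand-in for real RPC output, and the variable names are illustrative rather than the harness's actual locals (the real helper also checks `state`, `raid_level`, and `strip_size_kb`):

```shell
#!/usr/bin/env bash
# Sketch of the base-bdev counting check used by verify_raid_bdev_state.
# The JSON below is a trimmed stand-in for `bdev_raid_get_bdevs` output:
# two slots are degraded (unconfigured) and two base bdevs remain.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "base_bdevs_list": [
    { "name": null, "is_configured": false },
    { "name": null, "is_configured": false },
    { "name": "BaseBdev3", "is_configured": true },
    { "name": "BaseBdev4", "is_configured": true }
  ]
}'

# Count configured base bdevs by matching their lines in the JSON text;
# the real harness does this with jq over the parsed structure instead.
num_discovered=$(grep -c '"is_configured": true' <<< "$raid_bdev_info")
expected=2

[[ "$num_discovered" == "$expected" ]] && echo OK || echo FAIL
```

This mirrors why the log keeps asserting `num_base_bdevs_discovered": 2` after the spare is torn down: with two null slots, only BaseBdev3 and BaseBdev4 remain configured, so a raid1 array built on 4 base bdevs reports 2 discovered and 2 operational.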
00:14:20.987 ************************************ 00:14:20.987 START TEST raid_rebuild_test_io 00:14:20.987 ************************************ 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:20.987 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:20.988 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:20.988 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:20.988 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:20.988 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:20.988 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:20.988 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:20.988 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:20.988 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:20.988 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79360 00:14:20.988 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:20.988 05:52:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79360 00:14:20.988 05:52:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 79360 ']' 00:14:20.988 05:52:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.988 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:14:20.988 05:52:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:20.988 05:52:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.988 05:52:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:20.988 05:52:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:20.988 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:20.988 Zero copy mechanism will not be used. 00:14:20.988 [2024-12-12 05:52:28.474559] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:14:20.988 [2024-12-12 05:52:28.474688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79360 ] 00:14:21.248 [2024-12-12 05:52:28.645587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.248 [2024-12-12 05:52:28.749495] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:21.507 [2024-12-12 05:52:28.936670] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.507 [2024-12-12 05:52:28.936722] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:21.767 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:21.767 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:21.767 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:21.767 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:21.767 05:52:29 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.767 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.028 BaseBdev1_malloc 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.028 [2024-12-12 05:52:29.327524] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:22.028 [2024-12-12 05:52:29.327587] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.028 [2024-12-12 05:52:29.327610] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:22.028 [2024-12-12 05:52:29.327620] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.028 [2024-12-12 05:52:29.329651] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.028 [2024-12-12 05:52:29.329751] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:22.028 BaseBdev1 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.028 
BaseBdev2_malloc 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.028 [2024-12-12 05:52:29.380449] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:22.028 [2024-12-12 05:52:29.380580] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.028 [2024-12-12 05:52:29.380617] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:22.028 [2024-12-12 05:52:29.380669] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.028 [2024-12-12 05:52:29.382758] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.028 [2024-12-12 05:52:29.382856] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:22.028 BaseBdev2 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.028 BaseBdev3_malloc 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.028 [2024-12-12 05:52:29.467999] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:22.028 [2024-12-12 05:52:29.468117] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.028 [2024-12-12 05:52:29.468156] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:22.028 [2024-12-12 05:52:29.468214] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.028 [2024-12-12 05:52:29.470255] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.028 [2024-12-12 05:52:29.470340] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:22.028 BaseBdev3 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.028 BaseBdev4_malloc 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.028 [2024-12-12 05:52:29.522548] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:22.028 [2024-12-12 05:52:29.522641] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.028 [2024-12-12 05:52:29.522694] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:22.028 [2024-12-12 05:52:29.522724] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.028 [2024-12-12 05:52:29.524726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.028 [2024-12-12 05:52:29.524797] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:22.028 BaseBdev4 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.028 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.288 spare_malloc 00:14:22.288 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.288 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:22.288 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.288 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.288 spare_delay 00:14:22.288 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.288 05:52:29 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:22.288 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.288 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.288 [2024-12-12 05:52:29.587460] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:22.288 [2024-12-12 05:52:29.587520] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:22.289 [2024-12-12 05:52:29.587536] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:22.289 [2024-12-12 05:52:29.587546] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:22.289 [2024-12-12 05:52:29.589563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:22.289 [2024-12-12 05:52:29.589658] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:22.289 spare 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.289 [2024-12-12 05:52:29.599495] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:22.289 [2024-12-12 05:52:29.601279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.289 [2024-12-12 05:52:29.601338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:22.289 [2024-12-12 05:52:29.601384] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:14:22.289 [2024-12-12 05:52:29.601464] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:22.289 [2024-12-12 05:52:29.601478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:22.289 [2024-12-12 05:52:29.601728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:22.289 [2024-12-12 05:52:29.601896] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:22.289 [2024-12-12 05:52:29.601908] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:22.289 [2024-12-12 05:52:29.602066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.289 "name": "raid_bdev1", 00:14:22.289 "uuid": "6f4b118b-39df-4e14-a13b-80dd840a3b8a", 00:14:22.289 "strip_size_kb": 0, 00:14:22.289 "state": "online", 00:14:22.289 "raid_level": "raid1", 00:14:22.289 "superblock": false, 00:14:22.289 "num_base_bdevs": 4, 00:14:22.289 "num_base_bdevs_discovered": 4, 00:14:22.289 "num_base_bdevs_operational": 4, 00:14:22.289 "base_bdevs_list": [ 00:14:22.289 { 00:14:22.289 "name": "BaseBdev1", 00:14:22.289 "uuid": "d006dfcf-e7ab-541a-a5b4-0be1a316254a", 00:14:22.289 "is_configured": true, 00:14:22.289 "data_offset": 0, 00:14:22.289 "data_size": 65536 00:14:22.289 }, 00:14:22.289 { 00:14:22.289 "name": "BaseBdev2", 00:14:22.289 "uuid": "0a4f6437-7aec-5170-b610-c6152f74817f", 00:14:22.289 "is_configured": true, 00:14:22.289 "data_offset": 0, 00:14:22.289 "data_size": 65536 00:14:22.289 }, 00:14:22.289 { 00:14:22.289 "name": "BaseBdev3", 00:14:22.289 "uuid": "f3c23ee7-561a-563f-b9f2-40ad3b037f25", 00:14:22.289 "is_configured": true, 00:14:22.289 "data_offset": 0, 00:14:22.289 "data_size": 65536 00:14:22.289 }, 00:14:22.289 { 00:14:22.289 "name": "BaseBdev4", 00:14:22.289 "uuid": "3c44910c-5dde-545a-946a-69494ddf787a", 00:14:22.289 "is_configured": true, 00:14:22.289 "data_offset": 0, 00:14:22.289 "data_size": 65536 00:14:22.289 } 00:14:22.289 ] 00:14:22.289 }' 00:14:22.289 
05:52:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.289 05:52:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.549 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:22.549 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:22.549 05:52:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.549 05:52:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.549 [2024-12-12 05:52:30.027025] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:22.549 05:52:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.549 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:22.549 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.549 05:52:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.549 05:52:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.549 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:22.809 05:52:30 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.809 [2024-12-12 05:52:30.114561] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:22.809 "name": "raid_bdev1", 00:14:22.809 "uuid": "6f4b118b-39df-4e14-a13b-80dd840a3b8a", 00:14:22.809 "strip_size_kb": 0, 00:14:22.809 "state": "online", 00:14:22.809 "raid_level": "raid1", 00:14:22.809 "superblock": false, 00:14:22.809 "num_base_bdevs": 4, 00:14:22.809 "num_base_bdevs_discovered": 3, 00:14:22.809 "num_base_bdevs_operational": 3, 00:14:22.809 "base_bdevs_list": [ 00:14:22.809 { 00:14:22.809 "name": null, 00:14:22.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.809 "is_configured": false, 00:14:22.809 "data_offset": 0, 00:14:22.809 "data_size": 65536 00:14:22.809 }, 00:14:22.809 { 00:14:22.809 "name": "BaseBdev2", 00:14:22.809 "uuid": "0a4f6437-7aec-5170-b610-c6152f74817f", 00:14:22.809 "is_configured": true, 00:14:22.809 "data_offset": 0, 00:14:22.809 "data_size": 65536 00:14:22.809 }, 00:14:22.809 { 00:14:22.809 "name": "BaseBdev3", 00:14:22.809 "uuid": "f3c23ee7-561a-563f-b9f2-40ad3b037f25", 00:14:22.809 "is_configured": true, 00:14:22.809 "data_offset": 0, 00:14:22.809 "data_size": 65536 00:14:22.809 }, 00:14:22.809 { 00:14:22.809 "name": "BaseBdev4", 00:14:22.809 "uuid": "3c44910c-5dde-545a-946a-69494ddf787a", 00:14:22.809 "is_configured": true, 00:14:22.809 "data_offset": 0, 00:14:22.809 "data_size": 65536 00:14:22.809 } 00:14:22.809 ] 00:14:22.809 }' 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:22.809 05:52:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:22.809 [2024-12-12 05:52:30.214398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:22.809 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:22.809 Zero copy mechanism will not be used. 00:14:22.809 Running I/O for 60 seconds... 
00:14:23.069 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:23.069 05:52:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.069 05:52:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:23.069 [2024-12-12 05:52:30.575224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:23.329 05:52:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.329 05:52:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:23.329 [2024-12-12 05:52:30.646560] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:23.329 [2024-12-12 05:52:30.648423] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:23.329 [2024-12-12 05:52:30.775558] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:23.329 [2024-12-12 05:52:30.776168] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:23.589 [2024-12-12 05:52:30.999101] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:23.589 [2024-12-12 05:52:30.999813] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:24.108 166.00 IOPS, 498.00 MiB/s [2024-12-12T05:52:31.630Z] [2024-12-12 05:52:31.448436] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:24.108 [2024-12-12 05:52:31.449358] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:24.108 05:52:31 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:24.108 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.108 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:24.108 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:24.108 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.439 "name": "raid_bdev1", 00:14:24.439 "uuid": "6f4b118b-39df-4e14-a13b-80dd840a3b8a", 00:14:24.439 "strip_size_kb": 0, 00:14:24.439 "state": "online", 00:14:24.439 "raid_level": "raid1", 00:14:24.439 "superblock": false, 00:14:24.439 "num_base_bdevs": 4, 00:14:24.439 "num_base_bdevs_discovered": 4, 00:14:24.439 "num_base_bdevs_operational": 4, 00:14:24.439 "process": { 00:14:24.439 "type": "rebuild", 00:14:24.439 "target": "spare", 00:14:24.439 "progress": { 00:14:24.439 "blocks": 10240, 00:14:24.439 "percent": 15 00:14:24.439 } 00:14:24.439 }, 00:14:24.439 "base_bdevs_list": [ 00:14:24.439 { 00:14:24.439 "name": "spare", 00:14:24.439 "uuid": "727e9904-d6c7-5d38-baa1-15186ea40feb", 00:14:24.439 "is_configured": true, 00:14:24.439 "data_offset": 0, 00:14:24.439 "data_size": 65536 00:14:24.439 }, 00:14:24.439 { 
00:14:24.439 "name": "BaseBdev2", 00:14:24.439 "uuid": "0a4f6437-7aec-5170-b610-c6152f74817f", 00:14:24.439 "is_configured": true, 00:14:24.439 "data_offset": 0, 00:14:24.439 "data_size": 65536 00:14:24.439 }, 00:14:24.439 { 00:14:24.439 "name": "BaseBdev3", 00:14:24.439 "uuid": "f3c23ee7-561a-563f-b9f2-40ad3b037f25", 00:14:24.439 "is_configured": true, 00:14:24.439 "data_offset": 0, 00:14:24.439 "data_size": 65536 00:14:24.439 }, 00:14:24.439 { 00:14:24.439 "name": "BaseBdev4", 00:14:24.439 "uuid": "3c44910c-5dde-545a-946a-69494ddf787a", 00:14:24.439 "is_configured": true, 00:14:24.439 "data_offset": 0, 00:14:24.439 "data_size": 65536 00:14:24.439 } 00:14:24.439 ] 00:14:24.439 }' 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.439 [2024-12-12 05:52:31.750631] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.439 [2024-12-12 05:52:31.777248] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:24.439 [2024-12-12 05:52:31.777865] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:24.439 [2024-12-12 05:52:31.783972] bdev_raid.c:2571:raid_bdev_process_finish_done: 
*WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:24.439 [2024-12-12 05:52:31.787272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.439 [2024-12-12 05:52:31.787342] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:24.439 [2024-12-12 05:52:31.787373] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:24.439 [2024-12-12 05:52:31.815759] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.439 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.440 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:24.440 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.440 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.440 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:24.440 05:52:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.440 05:52:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.440 05:52:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.440 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.440 "name": "raid_bdev1", 00:14:24.440 "uuid": "6f4b118b-39df-4e14-a13b-80dd840a3b8a", 00:14:24.440 "strip_size_kb": 0, 00:14:24.440 "state": "online", 00:14:24.440 "raid_level": "raid1", 00:14:24.440 "superblock": false, 00:14:24.440 "num_base_bdevs": 4, 00:14:24.440 "num_base_bdevs_discovered": 3, 00:14:24.440 "num_base_bdevs_operational": 3, 00:14:24.440 "base_bdevs_list": [ 00:14:24.440 { 00:14:24.440 "name": null, 00:14:24.440 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.440 "is_configured": false, 00:14:24.440 "data_offset": 0, 00:14:24.440 "data_size": 65536 00:14:24.440 }, 00:14:24.440 { 00:14:24.440 "name": "BaseBdev2", 00:14:24.440 "uuid": "0a4f6437-7aec-5170-b610-c6152f74817f", 00:14:24.440 "is_configured": true, 00:14:24.440 "data_offset": 0, 00:14:24.440 "data_size": 65536 00:14:24.440 }, 00:14:24.440 { 00:14:24.440 "name": "BaseBdev3", 00:14:24.440 "uuid": "f3c23ee7-561a-563f-b9f2-40ad3b037f25", 00:14:24.440 "is_configured": true, 00:14:24.440 "data_offset": 0, 00:14:24.440 "data_size": 65536 00:14:24.440 }, 00:14:24.440 { 00:14:24.440 "name": "BaseBdev4", 00:14:24.440 "uuid": "3c44910c-5dde-545a-946a-69494ddf787a", 00:14:24.440 "is_configured": true, 00:14:24.440 "data_offset": 0, 00:14:24.440 "data_size": 65536 00:14:24.440 } 00:14:24.440 ] 00:14:24.440 }' 00:14:24.440 05:52:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.440 05:52:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.717 169.50 IOPS, 508.50 MiB/s 
[2024-12-12T05:52:32.239Z] 05:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:24.717 05:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:24.717 05:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:24.717 05:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:24.717 05:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:24.717 05:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.717 05:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.717 05:52:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.717 05:52:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.977 05:52:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.977 05:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:24.977 "name": "raid_bdev1", 00:14:24.977 "uuid": "6f4b118b-39df-4e14-a13b-80dd840a3b8a", 00:14:24.977 "strip_size_kb": 0, 00:14:24.977 "state": "online", 00:14:24.977 "raid_level": "raid1", 00:14:24.977 "superblock": false, 00:14:24.977 "num_base_bdevs": 4, 00:14:24.977 "num_base_bdevs_discovered": 3, 00:14:24.977 "num_base_bdevs_operational": 3, 00:14:24.977 "base_bdevs_list": [ 00:14:24.977 { 00:14:24.977 "name": null, 00:14:24.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.977 "is_configured": false, 00:14:24.977 "data_offset": 0, 00:14:24.977 "data_size": 65536 00:14:24.977 }, 00:14:24.977 { 00:14:24.977 "name": "BaseBdev2", 00:14:24.977 "uuid": "0a4f6437-7aec-5170-b610-c6152f74817f", 00:14:24.977 "is_configured": true, 00:14:24.977 
"data_offset": 0, 00:14:24.977 "data_size": 65536 00:14:24.977 }, 00:14:24.977 { 00:14:24.977 "name": "BaseBdev3", 00:14:24.977 "uuid": "f3c23ee7-561a-563f-b9f2-40ad3b037f25", 00:14:24.977 "is_configured": true, 00:14:24.977 "data_offset": 0, 00:14:24.977 "data_size": 65536 00:14:24.977 }, 00:14:24.977 { 00:14:24.977 "name": "BaseBdev4", 00:14:24.977 "uuid": "3c44910c-5dde-545a-946a-69494ddf787a", 00:14:24.977 "is_configured": true, 00:14:24.977 "data_offset": 0, 00:14:24.977 "data_size": 65536 00:14:24.977 } 00:14:24.977 ] 00:14:24.977 }' 00:14:24.977 05:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:24.977 05:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:24.977 05:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:24.977 05:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:24.978 05:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:24.978 05:52:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.978 05:52:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:24.978 [2024-12-12 05:52:32.373142] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:24.978 05:52:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.978 05:52:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:24.978 [2024-12-12 05:52:32.431879] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:24.978 [2024-12-12 05:52:32.433858] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:25.237 [2024-12-12 05:52:32.554695] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:25.237 [2024-12-12 05:52:32.555292] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:25.497 [2024-12-12 05:52:32.765076] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:25.497 [2024-12-12 05:52:32.765447] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:25.497 [2024-12-12 05:52:32.991156] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:25.497 [2024-12-12 05:52:32.991759] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:25.757 [2024-12-12 05:52:33.208897] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:25.757 [2024-12-12 05:52:33.209389] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:26.017 164.33 IOPS, 493.00 MiB/s [2024-12-12T05:52:33.540Z] 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.018 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.018 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.018 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.018 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.018 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.018 05:52:33 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.018 05:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.018 05:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.018 05:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.018 [2024-12-12 05:52:33.457497] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:26.018 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.018 "name": "raid_bdev1", 00:14:26.018 "uuid": "6f4b118b-39df-4e14-a13b-80dd840a3b8a", 00:14:26.018 "strip_size_kb": 0, 00:14:26.018 "state": "online", 00:14:26.018 "raid_level": "raid1", 00:14:26.018 "superblock": false, 00:14:26.018 "num_base_bdevs": 4, 00:14:26.018 "num_base_bdevs_discovered": 4, 00:14:26.018 "num_base_bdevs_operational": 4, 00:14:26.018 "process": { 00:14:26.018 "type": "rebuild", 00:14:26.018 "target": "spare", 00:14:26.018 "progress": { 00:14:26.018 "blocks": 12288, 00:14:26.018 "percent": 18 00:14:26.018 } 00:14:26.018 }, 00:14:26.018 "base_bdevs_list": [ 00:14:26.018 { 00:14:26.018 "name": "spare", 00:14:26.018 "uuid": "727e9904-d6c7-5d38-baa1-15186ea40feb", 00:14:26.018 "is_configured": true, 00:14:26.018 "data_offset": 0, 00:14:26.018 "data_size": 65536 00:14:26.018 }, 00:14:26.018 { 00:14:26.018 "name": "BaseBdev2", 00:14:26.018 "uuid": "0a4f6437-7aec-5170-b610-c6152f74817f", 00:14:26.018 "is_configured": true, 00:14:26.018 "data_offset": 0, 00:14:26.018 "data_size": 65536 00:14:26.018 }, 00:14:26.018 { 00:14:26.018 "name": "BaseBdev3", 00:14:26.018 "uuid": "f3c23ee7-561a-563f-b9f2-40ad3b037f25", 00:14:26.018 "is_configured": true, 00:14:26.018 "data_offset": 0, 00:14:26.018 "data_size": 65536 00:14:26.018 }, 00:14:26.018 { 00:14:26.018 "name": "BaseBdev4", 00:14:26.018 "uuid": 
"3c44910c-5dde-545a-946a-69494ddf787a", 00:14:26.018 "is_configured": true, 00:14:26.018 "data_offset": 0, 00:14:26.018 "data_size": 65536 00:14:26.018 } 00:14:26.018 ] 00:14:26.018 }' 00:14:26.018 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.018 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.018 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.278 [2024-12-12 05:52:33.573757] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:26.278 [2024-12-12 05:52:33.586058] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:26.278 [2024-12-12 05:52:33.693949] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:26.278 [2024-12-12 05:52:33.694021] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.278 "name": "raid_bdev1", 00:14:26.278 "uuid": "6f4b118b-39df-4e14-a13b-80dd840a3b8a", 00:14:26.278 "strip_size_kb": 0, 00:14:26.278 "state": "online", 00:14:26.278 "raid_level": "raid1", 00:14:26.278 "superblock": false, 00:14:26.278 "num_base_bdevs": 4, 00:14:26.278 "num_base_bdevs_discovered": 3, 00:14:26.278 "num_base_bdevs_operational": 3, 00:14:26.278 "process": { 00:14:26.278 "type": "rebuild", 00:14:26.278 "target": "spare", 00:14:26.278 "progress": { 00:14:26.278 "blocks": 16384, 00:14:26.278 
"percent": 25 00:14:26.278 } 00:14:26.278 }, 00:14:26.278 "base_bdevs_list": [ 00:14:26.278 { 00:14:26.278 "name": "spare", 00:14:26.278 "uuid": "727e9904-d6c7-5d38-baa1-15186ea40feb", 00:14:26.278 "is_configured": true, 00:14:26.278 "data_offset": 0, 00:14:26.278 "data_size": 65536 00:14:26.278 }, 00:14:26.278 { 00:14:26.278 "name": null, 00:14:26.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.278 "is_configured": false, 00:14:26.278 "data_offset": 0, 00:14:26.278 "data_size": 65536 00:14:26.278 }, 00:14:26.278 { 00:14:26.278 "name": "BaseBdev3", 00:14:26.278 "uuid": "f3c23ee7-561a-563f-b9f2-40ad3b037f25", 00:14:26.278 "is_configured": true, 00:14:26.278 "data_offset": 0, 00:14:26.278 "data_size": 65536 00:14:26.278 }, 00:14:26.278 { 00:14:26.278 "name": "BaseBdev4", 00:14:26.278 "uuid": "3c44910c-5dde-545a-946a-69494ddf787a", 00:14:26.278 "is_configured": true, 00:14:26.278 "data_offset": 0, 00:14:26.278 "data_size": 65536 00:14:26.278 } 00:14:26.278 ] 00:14:26.278 }' 00:14:26.278 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.537 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.537 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.537 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.537 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=467 00:14:26.537 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:26.537 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:26.538 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:26.538 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:14:26.538 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:26.538 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:26.538 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:26.538 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:26.538 05:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.538 05:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:26.538 05:52:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.538 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:26.538 "name": "raid_bdev1", 00:14:26.538 "uuid": "6f4b118b-39df-4e14-a13b-80dd840a3b8a", 00:14:26.538 "strip_size_kb": 0, 00:14:26.538 "state": "online", 00:14:26.538 "raid_level": "raid1", 00:14:26.538 "superblock": false, 00:14:26.538 "num_base_bdevs": 4, 00:14:26.538 "num_base_bdevs_discovered": 3, 00:14:26.538 "num_base_bdevs_operational": 3, 00:14:26.538 "process": { 00:14:26.538 "type": "rebuild", 00:14:26.538 "target": "spare", 00:14:26.538 "progress": { 00:14:26.538 "blocks": 18432, 00:14:26.538 "percent": 28 00:14:26.538 } 00:14:26.538 }, 00:14:26.538 "base_bdevs_list": [ 00:14:26.538 { 00:14:26.538 "name": "spare", 00:14:26.538 "uuid": "727e9904-d6c7-5d38-baa1-15186ea40feb", 00:14:26.538 "is_configured": true, 00:14:26.538 "data_offset": 0, 00:14:26.538 "data_size": 65536 00:14:26.538 }, 00:14:26.538 { 00:14:26.538 "name": null, 00:14:26.538 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.538 "is_configured": false, 00:14:26.538 "data_offset": 0, 00:14:26.538 "data_size": 65536 00:14:26.538 }, 00:14:26.538 { 00:14:26.538 "name": "BaseBdev3", 00:14:26.538 "uuid": 
"f3c23ee7-561a-563f-b9f2-40ad3b037f25", 00:14:26.538 "is_configured": true, 00:14:26.538 "data_offset": 0, 00:14:26.538 "data_size": 65536 00:14:26.538 }, 00:14:26.538 { 00:14:26.538 "name": "BaseBdev4", 00:14:26.538 "uuid": "3c44910c-5dde-545a-946a-69494ddf787a", 00:14:26.538 "is_configured": true, 00:14:26.538 "data_offset": 0, 00:14:26.538 "data_size": 65536 00:14:26.538 } 00:14:26.538 ] 00:14:26.538 }' 00:14:26.538 [2024-12-12 05:52:33.914858] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:26.538 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:26.538 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:26.538 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:26.538 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:26.538 05:52:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:26.538 [2024-12-12 05:52:34.016340] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:26.538 [2024-12-12 05:52:34.016700] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:27.368 151.50 IOPS, 454.50 MiB/s [2024-12-12T05:52:34.890Z] [2024-12-12 05:52:34.585449] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:27.627 05:52:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:27.627 05:52:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.627 05:52:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:27.627 05:52:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.627 05:52:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.627 05:52:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.627 05:52:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.627 05:52:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.627 05:52:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.627 05:52:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:27.628 05:52:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.628 05:52:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.628 "name": "raid_bdev1", 00:14:27.628 "uuid": "6f4b118b-39df-4e14-a13b-80dd840a3b8a", 00:14:27.628 "strip_size_kb": 0, 00:14:27.628 "state": "online", 00:14:27.628 "raid_level": "raid1", 00:14:27.628 "superblock": false, 00:14:27.628 "num_base_bdevs": 4, 00:14:27.628 "num_base_bdevs_discovered": 3, 00:14:27.628 "num_base_bdevs_operational": 3, 00:14:27.628 "process": { 00:14:27.628 "type": "rebuild", 00:14:27.628 "target": "spare", 00:14:27.628 "progress": { 00:14:27.628 "blocks": 36864, 00:14:27.628 "percent": 56 00:14:27.628 } 00:14:27.628 }, 00:14:27.628 "base_bdevs_list": [ 00:14:27.628 { 00:14:27.628 "name": "spare", 00:14:27.628 "uuid": "727e9904-d6c7-5d38-baa1-15186ea40feb", 00:14:27.628 "is_configured": true, 00:14:27.628 "data_offset": 0, 00:14:27.628 "data_size": 65536 00:14:27.628 }, 00:14:27.628 { 00:14:27.628 "name": null, 00:14:27.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.628 "is_configured": false, 00:14:27.628 "data_offset": 0, 00:14:27.628 
"data_size": 65536 00:14:27.628 }, 00:14:27.628 { 00:14:27.628 "name": "BaseBdev3", 00:14:27.628 "uuid": "f3c23ee7-561a-563f-b9f2-40ad3b037f25", 00:14:27.628 "is_configured": true, 00:14:27.628 "data_offset": 0, 00:14:27.628 "data_size": 65536 00:14:27.628 }, 00:14:27.628 { 00:14:27.628 "name": "BaseBdev4", 00:14:27.628 "uuid": "3c44910c-5dde-545a-946a-69494ddf787a", 00:14:27.628 "is_configured": true, 00:14:27.628 "data_offset": 0, 00:14:27.628 "data_size": 65536 00:14:27.628 } 00:14:27.628 ] 00:14:27.628 }' 00:14:27.628 05:52:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.628 05:52:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.628 05:52:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.628 05:52:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.628 05:52:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:27.886 128.80 IOPS, 386.40 MiB/s [2024-12-12T05:52:35.408Z] [2024-12-12 05:52:35.385895] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:27.886 [2024-12-12 05:52:35.386823] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:28.455 [2024-12-12 05:52:35.807515] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:28.715 05:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:28.715 05:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:28.715 05:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.715 05:52:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:28.715 05:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:28.715 05:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.716 05:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.716 05:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.716 05:52:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.716 05:52:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:28.716 05:52:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.716 05:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.716 "name": "raid_bdev1", 00:14:28.716 "uuid": "6f4b118b-39df-4e14-a13b-80dd840a3b8a", 00:14:28.716 "strip_size_kb": 0, 00:14:28.716 "state": "online", 00:14:28.716 "raid_level": "raid1", 00:14:28.716 "superblock": false, 00:14:28.716 "num_base_bdevs": 4, 00:14:28.716 "num_base_bdevs_discovered": 3, 00:14:28.716 "num_base_bdevs_operational": 3, 00:14:28.716 "process": { 00:14:28.716 "type": "rebuild", 00:14:28.716 "target": "spare", 00:14:28.716 "progress": { 00:14:28.716 "blocks": 55296, 00:14:28.716 "percent": 84 00:14:28.716 } 00:14:28.716 }, 00:14:28.716 "base_bdevs_list": [ 00:14:28.716 { 00:14:28.716 "name": "spare", 00:14:28.716 "uuid": "727e9904-d6c7-5d38-baa1-15186ea40feb", 00:14:28.716 "is_configured": true, 00:14:28.716 "data_offset": 0, 00:14:28.716 "data_size": 65536 00:14:28.716 }, 00:14:28.716 { 00:14:28.716 "name": null, 00:14:28.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.716 "is_configured": false, 00:14:28.716 "data_offset": 0, 00:14:28.716 "data_size": 65536 00:14:28.716 }, 00:14:28.716 { 
00:14:28.716 "name": "BaseBdev3", 00:14:28.716 "uuid": "f3c23ee7-561a-563f-b9f2-40ad3b037f25", 00:14:28.716 "is_configured": true, 00:14:28.716 "data_offset": 0, 00:14:28.716 "data_size": 65536 00:14:28.716 }, 00:14:28.716 { 00:14:28.716 "name": "BaseBdev4", 00:14:28.716 "uuid": "3c44910c-5dde-545a-946a-69494ddf787a", 00:14:28.716 "is_configured": true, 00:14:28.716 "data_offset": 0, 00:14:28.716 "data_size": 65536 00:14:28.716 } 00:14:28.716 ] 00:14:28.716 }' 00:14:28.716 05:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.716 113.83 IOPS, 341.50 MiB/s [2024-12-12T05:52:36.238Z] [2024-12-12 05:52:36.235181] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:28.976 05:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:28.976 05:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.976 05:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:28.976 05:52:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:29.544 [2024-12-12 05:52:36.772228] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:29.544 [2024-12-12 05:52:36.872043] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:29.544 [2024-12-12 05:52:36.880137] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.804 101.71 IOPS, 305.14 MiB/s [2024-12-12T05:52:37.326Z] 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.804 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.804 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:29.804 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.804 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.804 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.804 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.804 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.804 05:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.804 05:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.064 05:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.064 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.064 "name": "raid_bdev1", 00:14:30.064 "uuid": "6f4b118b-39df-4e14-a13b-80dd840a3b8a", 00:14:30.064 "strip_size_kb": 0, 00:14:30.064 "state": "online", 00:14:30.064 "raid_level": "raid1", 00:14:30.064 "superblock": false, 00:14:30.064 "num_base_bdevs": 4, 00:14:30.064 "num_base_bdevs_discovered": 3, 00:14:30.064 "num_base_bdevs_operational": 3, 00:14:30.064 "base_bdevs_list": [ 00:14:30.064 { 00:14:30.064 "name": "spare", 00:14:30.064 "uuid": "727e9904-d6c7-5d38-baa1-15186ea40feb", 00:14:30.064 "is_configured": true, 00:14:30.064 "data_offset": 0, 00:14:30.064 "data_size": 65536 00:14:30.064 }, 00:14:30.064 { 00:14:30.064 "name": null, 00:14:30.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.064 "is_configured": false, 00:14:30.064 "data_offset": 0, 00:14:30.064 "data_size": 65536 00:14:30.064 }, 00:14:30.064 { 00:14:30.064 "name": "BaseBdev3", 00:14:30.064 "uuid": "f3c23ee7-561a-563f-b9f2-40ad3b037f25", 00:14:30.064 "is_configured": true, 00:14:30.064 "data_offset": 0, 
00:14:30.064 "data_size": 65536 00:14:30.064 }, 00:14:30.064 { 00:14:30.064 "name": "BaseBdev4", 00:14:30.064 "uuid": "3c44910c-5dde-545a-946a-69494ddf787a", 00:14:30.064 "is_configured": true, 00:14:30.064 "data_offset": 0, 00:14:30.064 "data_size": 65536 00:14:30.064 } 00:14:30.064 ] 00:14:30.064 }' 00:14:30.064 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.064 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:30.064 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.064 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:30.064 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:30.064 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:30.064 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.064 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:30.064 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:30.064 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.064 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.064 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.064 05:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.064 05:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.064 05:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.064 05:52:37 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.064 "name": "raid_bdev1", 00:14:30.064 "uuid": "6f4b118b-39df-4e14-a13b-80dd840a3b8a", 00:14:30.064 "strip_size_kb": 0, 00:14:30.064 "state": "online", 00:14:30.064 "raid_level": "raid1", 00:14:30.064 "superblock": false, 00:14:30.064 "num_base_bdevs": 4, 00:14:30.064 "num_base_bdevs_discovered": 3, 00:14:30.064 "num_base_bdevs_operational": 3, 00:14:30.064 "base_bdevs_list": [ 00:14:30.064 { 00:14:30.064 "name": "spare", 00:14:30.064 "uuid": "727e9904-d6c7-5d38-baa1-15186ea40feb", 00:14:30.064 "is_configured": true, 00:14:30.064 "data_offset": 0, 00:14:30.064 "data_size": 65536 00:14:30.064 }, 00:14:30.064 { 00:14:30.064 "name": null, 00:14:30.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.064 "is_configured": false, 00:14:30.064 "data_offset": 0, 00:14:30.064 "data_size": 65536 00:14:30.064 }, 00:14:30.064 { 00:14:30.064 "name": "BaseBdev3", 00:14:30.065 "uuid": "f3c23ee7-561a-563f-b9f2-40ad3b037f25", 00:14:30.065 "is_configured": true, 00:14:30.065 "data_offset": 0, 00:14:30.065 "data_size": 65536 00:14:30.065 }, 00:14:30.065 { 00:14:30.065 "name": "BaseBdev4", 00:14:30.065 "uuid": "3c44910c-5dde-545a-946a-69494ddf787a", 00:14:30.065 "is_configured": true, 00:14:30.065 "data_offset": 0, 00:14:30.065 "data_size": 65536 00:14:30.065 } 00:14:30.065 ] 00:14:30.065 }' 00:14:30.065 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.065 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:30.065 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.065 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:30.065 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:30.065 05:52:37 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:30.065 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:30.065 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:30.065 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:30.065 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:30.065 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:30.065 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:30.065 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:30.065 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:30.065 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.065 05:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.065 05:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.065 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.065 05:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.325 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:30.325 "name": "raid_bdev1", 00:14:30.325 "uuid": "6f4b118b-39df-4e14-a13b-80dd840a3b8a", 00:14:30.325 "strip_size_kb": 0, 00:14:30.325 "state": "online", 00:14:30.325 "raid_level": "raid1", 00:14:30.325 "superblock": false, 00:14:30.325 "num_base_bdevs": 4, 00:14:30.325 "num_base_bdevs_discovered": 3, 00:14:30.325 "num_base_bdevs_operational": 3, 00:14:30.325 "base_bdevs_list": [ 00:14:30.325 
{ 00:14:30.325 "name": "spare", 00:14:30.325 "uuid": "727e9904-d6c7-5d38-baa1-15186ea40feb", 00:14:30.325 "is_configured": true, 00:14:30.325 "data_offset": 0, 00:14:30.325 "data_size": 65536 00:14:30.325 }, 00:14:30.325 { 00:14:30.325 "name": null, 00:14:30.325 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.325 "is_configured": false, 00:14:30.325 "data_offset": 0, 00:14:30.325 "data_size": 65536 00:14:30.325 }, 00:14:30.325 { 00:14:30.325 "name": "BaseBdev3", 00:14:30.325 "uuid": "f3c23ee7-561a-563f-b9f2-40ad3b037f25", 00:14:30.325 "is_configured": true, 00:14:30.325 "data_offset": 0, 00:14:30.325 "data_size": 65536 00:14:30.325 }, 00:14:30.325 { 00:14:30.325 "name": "BaseBdev4", 00:14:30.325 "uuid": "3c44910c-5dde-545a-946a-69494ddf787a", 00:14:30.325 "is_configured": true, 00:14:30.325 "data_offset": 0, 00:14:30.325 "data_size": 65536 00:14:30.325 } 00:14:30.325 ] 00:14:30.325 }' 00:14:30.325 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:30.325 05:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.585 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:30.585 05:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.585 05:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:30.585 [2024-12-12 05:52:37.931750] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:30.585 [2024-12-12 05:52:37.931839] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:30.585 00:14:30.585 Latency(us) 00:14:30.585 [2024-12-12T05:52:38.107Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:30.585 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:30.585 raid_bdev1 : 7.76 94.61 283.83 0.00 0.00 14430.94 
329.11 116762.83 00:14:30.585 [2024-12-12T05:52:38.107Z] =================================================================================================================== 00:14:30.585 [2024-12-12T05:52:38.107Z] Total : 94.61 283.83 0.00 0.00 14430.94 329.11 116762.83 00:14:30.585 [2024-12-12 05:52:37.979199] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:30.585 [2024-12-12 05:52:37.979318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:30.585 [2024-12-12 05:52:37.979459] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:30.585 [2024-12-12 05:52:37.979572] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:30.585 { 00:14:30.585 "results": [ 00:14:30.585 { 00:14:30.585 "job": "raid_bdev1", 00:14:30.585 "core_mask": "0x1", 00:14:30.585 "workload": "randrw", 00:14:30.585 "percentage": 50, 00:14:30.585 "status": "finished", 00:14:30.585 "queue_depth": 2, 00:14:30.585 "io_size": 3145728, 00:14:30.585 "runtime": 7.758227, 00:14:30.585 "iops": 94.609245127785, 00:14:30.585 "mibps": 283.827735383355, 00:14:30.585 "io_failed": 0, 00:14:30.585 "io_timeout": 0, 00:14:30.585 "avg_latency_us": 14430.943245719453, 00:14:30.585 "min_latency_us": 329.1109170305677, 00:14:30.585 "max_latency_us": 116762.82969432314 00:14:30.585 } 00:14:30.585 ], 00:14:30.585 "core_count": 1 00:14:30.585 } 00:14:30.585 05:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.585 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:30.585 05:52:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.585 05:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.585 05:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:14:30.585 05:52:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.585 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:30.585 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:30.585 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:30.585 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:30.585 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:30.585 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:30.585 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:30.585 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:30.585 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:30.585 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:30.585 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:30.585 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:30.585 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:30.845 /dev/nbd0 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:30.845 05:52:38 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:30.845 1+0 records in 00:14:30.845 1+0 records out 00:14:30.845 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000301797 s, 13.6 MB/s 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@728 -- # continue 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:30.845 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:14:31.105 /dev/nbd1 00:14:31.105 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:31.105 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:31.105 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:31.105 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:31.105 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:31.105 05:52:38 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:31.105 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:31.105 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:31.105 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:31.105 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:31.105 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:31.105 1+0 records in 00:14:31.105 1+0 records out 00:14:31.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280586 s, 14.6 MB/s 00:14:31.105 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.105 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:31.105 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.105 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:31.105 05:52:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:31.105 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:31.105 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:31.105 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:31.364 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:31.364 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.364 05:52:38 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:31.365 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:31.365 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:31.365 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:31.365 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:31.624 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:31.624 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:31.624 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:31.624 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:31.624 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:31.624 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:31.624 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:31.624 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:31.624 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:31.624 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:31.624 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:31.624 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.624 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:31.624 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:14:31.624 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:31.624 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:31.624 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:14:31.624 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:31.624 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:31.624 05:52:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:31.624 /dev/nbd1 00:14:31.624 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:31.624 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:31.624 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:31.624 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:14:31.624 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:31.624 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:31.624 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:31.624 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:14:31.624 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:31.624 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:31.624 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:31.624 1+0 records in 00:14:31.624 1+0 records out 
00:14:31.624 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019395 s, 21.1 MB/s 00:14:31.624 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.624 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:14:31.624 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:31.624 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:31.884 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:14:31.884 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:31.884 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:31.884 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:31.884 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:31.884 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.884 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:31.884 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:31.884 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:31.884 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:31.884 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 79360 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 79360 ']' 00:14:32.144 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 79360 00:14:32.404 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:14:32.404 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:32.404 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79360 00:14:32.404 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:32.404 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:32.404 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79360' 00:14:32.404 killing process with pid 79360 00:14:32.404 Received shutdown signal, test time was about 9.509156 seconds 00:14:32.404 00:14:32.404 Latency(us) 00:14:32.404 [2024-12-12T05:52:39.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:32.404 [2024-12-12T05:52:39.926Z] =================================================================================================================== 00:14:32.404 [2024-12-12T05:52:39.926Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:32.404 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 79360 00:14:32.404 [2024-12-12 
05:52:39.707180] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:32.404 05:52:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 79360 00:14:32.664 [2024-12-12 05:52:40.091443] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:34.045 05:52:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:34.045 ************************************ 00:14:34.045 END TEST raid_rebuild_test_io 00:14:34.045 ************************************ 00:14:34.045 00:14:34.045 real 0m12.809s 00:14:34.045 user 0m16.093s 00:14:34.045 sys 0m1.790s 00:14:34.045 05:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:34.045 05:52:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.045 05:52:41 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:14:34.045 05:52:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:34.045 05:52:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:34.045 05:52:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:34.046 ************************************ 00:14:34.046 START TEST raid_rebuild_test_sb_io 00:14:34.046 ************************************ 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@573 -- # local verify=true 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:34.046 05:52:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79690 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79690 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79690 ']' 00:14:34.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
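The trace earlier in this log checks the array's health by piping `rpc_cmd bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "raid_bdev1")'` and inspecting the captured JSON. A minimal Python sketch of the same selection and the state check follows; the field values are copied from the dump above, and the RPC call is replaced by a JSON literal, so this is an illustration of the check, not SPDK API usage:

```python
import json

# Shaped like the `bdev_raid_get_bdevs all` output captured earlier in this
# log (base_bdevs_list abridged); in the real test this JSON comes from rpc.py.
raid_bdevs = json.loads("""
[
  {
    "name": "raid_bdev1",
    "state": "online",
    "raid_level": "raid1",
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 3,
    "num_base_bdevs_operational": 3
  }
]
""")

# Equivalent of: jq -r '.[] | select(.name == "raid_bdev1")'
info = next(b for b in raid_bdevs if b["name"] == "raid_bdev1")

# The test expects the array online with 3 of 4 base bdevs after removal.
assert info["state"] == "online"
assert info["num_base_bdevs_discovered"] == info["num_base_bdevs_operational"] == 3
```

The `select()` filter in `jq` and the `next()` generator expression above do the same job: pick the one entry whose `name` matches, then compare the discovered/operational counts against the expected rebuild state.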
00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:34.046 05:52:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.046 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:34.046 Zero copy mechanism will not be used. 00:14:34.046 [2024-12-12 05:52:41.363242] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:14:34.046 [2024-12-12 05:52:41.363361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79690 ] 00:14:34.046 [2024-12-12 05:52:41.520194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.306 [2024-12-12 05:52:41.623554] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:34.306 [2024-12-12 05:52:41.810866] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.306 [2024-12-12 05:52:41.811000] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.875 BaseBdev1_malloc 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.875 [2024-12-12 05:52:42.219094] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:34.875 [2024-12-12 05:52:42.219173] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.875 [2024-12-12 05:52:42.219195] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:34.875 [2024-12-12 05:52:42.219206] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.875 [2024-12-12 05:52:42.221249] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.875 [2024-12-12 05:52:42.221289] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:34.875 BaseBdev1 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.875 BaseBdev2_malloc 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev2_malloc -p BaseBdev2 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.875 [2024-12-12 05:52:42.270835] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:34.875 [2024-12-12 05:52:42.270959] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.875 [2024-12-12 05:52:42.270982] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:34.875 [2024-12-12 05:52:42.270992] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.875 [2024-12-12 05:52:42.273050] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.875 [2024-12-12 05:52:42.273099] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:34.875 BaseBdev2 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.875 BaseBdev3_malloc 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.875 05:52:42 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:34.875 [2024-12-12 05:52:42.355841] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:34.875 [2024-12-12 05:52:42.355943] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:34.875 [2024-12-12 05:52:42.355967] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:34.875 [2024-12-12 05:52:42.355977] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:34.875 [2024-12-12 05:52:42.357956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:34.875 [2024-12-12 05:52:42.357996] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:34.875 BaseBdev3 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.875 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.146 BaseBdev4_malloc 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.146 [2024-12-12 05:52:42.410631] vbdev_passthru.c: 608:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev4_malloc 00:14:35.146 [2024-12-12 05:52:42.410685] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.146 [2024-12-12 05:52:42.410719] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:35.146 [2024-12-12 05:52:42.410729] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.146 [2024-12-12 05:52:42.412729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.146 [2024-12-12 05:52:42.412801] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:35.146 BaseBdev4 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.146 spare_malloc 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.146 spare_delay 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.146 [2024-12-12 05:52:42.476098] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:35.146 [2024-12-12 05:52:42.476147] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:35.146 [2024-12-12 05:52:42.476164] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:35.146 [2024-12-12 05:52:42.476173] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:35.146 [2024-12-12 05:52:42.478157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:35.146 [2024-12-12 05:52:42.478239] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:35.146 spare 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.146 [2024-12-12 05:52:42.488132] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:35.146 [2024-12-12 05:52:42.489838] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:35.146 [2024-12-12 05:52:42.489899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:35.146 [2024-12-12 05:52:42.489948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:35.146 [2024-12-12 05:52:42.490124] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007780 00:14:35.146 [2024-12-12 05:52:42.490137] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:35.146 [2024-12-12 05:52:42.490356] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:35.146 [2024-12-12 05:52:42.490541] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:35.146 [2024-12-12 05:52:42.490557] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:35.146 [2024-12-12 05:52:42.490681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.146 "name": "raid_bdev1", 00:14:35.146 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:35.146 "strip_size_kb": 0, 00:14:35.146 "state": "online", 00:14:35.146 "raid_level": "raid1", 00:14:35.146 "superblock": true, 00:14:35.146 "num_base_bdevs": 4, 00:14:35.146 "num_base_bdevs_discovered": 4, 00:14:35.146 "num_base_bdevs_operational": 4, 00:14:35.146 "base_bdevs_list": [ 00:14:35.146 { 00:14:35.146 "name": "BaseBdev1", 00:14:35.146 "uuid": "ae2202cf-3171-54ca-8661-f47fbfde4018", 00:14:35.146 "is_configured": true, 00:14:35.146 "data_offset": 2048, 00:14:35.146 "data_size": 63488 00:14:35.146 }, 00:14:35.146 { 00:14:35.146 "name": "BaseBdev2", 00:14:35.146 "uuid": "0d78bc49-ae2a-5e2c-8d33-f61df79489b4", 00:14:35.146 "is_configured": true, 00:14:35.146 "data_offset": 2048, 00:14:35.146 "data_size": 63488 00:14:35.146 }, 00:14:35.146 { 00:14:35.146 "name": "BaseBdev3", 00:14:35.146 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:35.146 "is_configured": true, 00:14:35.146 "data_offset": 2048, 00:14:35.146 "data_size": 63488 00:14:35.146 }, 00:14:35.146 { 00:14:35.146 "name": "BaseBdev4", 00:14:35.146 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:35.146 "is_configured": true, 00:14:35.146 "data_offset": 2048, 00:14:35.146 "data_size": 63488 00:14:35.146 } 00:14:35.146 ] 00:14:35.146 }' 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:35.146 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.428 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:35.428 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.428 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.428 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:35.428 [2024-12-12 05:52:42.943684] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:35.706 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.706 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:35.706 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:35.706 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.706 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.706 05:52:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:35.706 05:52:43 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.706 [2024-12-12 05:52:43.043155] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:35.706 "name": "raid_bdev1", 00:14:35.706 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:35.706 "strip_size_kb": 0, 00:14:35.706 "state": "online", 00:14:35.706 "raid_level": "raid1", 00:14:35.706 "superblock": true, 00:14:35.706 "num_base_bdevs": 4, 00:14:35.706 "num_base_bdevs_discovered": 3, 00:14:35.706 "num_base_bdevs_operational": 3, 00:14:35.706 "base_bdevs_list": [ 00:14:35.706 { 00:14:35.706 "name": null, 00:14:35.706 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.706 "is_configured": false, 00:14:35.706 "data_offset": 0, 00:14:35.706 "data_size": 63488 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "name": "BaseBdev2", 00:14:35.706 "uuid": "0d78bc49-ae2a-5e2c-8d33-f61df79489b4", 00:14:35.706 "is_configured": true, 00:14:35.706 "data_offset": 2048, 00:14:35.706 "data_size": 63488 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "name": "BaseBdev3", 00:14:35.706 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:35.706 "is_configured": true, 00:14:35.706 "data_offset": 2048, 00:14:35.706 "data_size": 63488 00:14:35.706 }, 00:14:35.706 { 00:14:35.706 "name": "BaseBdev4", 00:14:35.706 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:35.706 "is_configured": true, 00:14:35.706 "data_offset": 2048, 00:14:35.706 "data_size": 63488 00:14:35.706 } 00:14:35.706 ] 00:14:35.706 }' 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:35.706 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.706 [2024-12-12 05:52:43.117843] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:35.706 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:35.706 Zero copy mechanism will not be used. 
00:14:35.706 Running I/O for 60 seconds... 00:14:35.979 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:35.979 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.979 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:35.979 [2024-12-12 05:52:43.459050] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:35.979 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.979 05:52:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:36.239 [2024-12-12 05:52:43.511328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:14:36.239 [2024-12-12 05:52:43.513240] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:36.239 [2024-12-12 05:52:43.623072] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:36.239 [2024-12-12 05:52:43.623766] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:36.239 [2024-12-12 05:52:43.747037] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:36.239 [2024-12-12 05:52:43.747452] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:36.809 [2024-12-12 05:52:44.088199] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:36.809 170.00 IOPS, 510.00 MiB/s [2024-12-12T05:52:44.331Z] [2024-12-12 05:52:44.202754] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:37.069 
05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:37.069 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.069 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:37.069 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:37.069 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.069 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.069 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.069 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.069 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.069 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.069 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.069 "name": "raid_bdev1", 00:14:37.069 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:37.069 "strip_size_kb": 0, 00:14:37.069 "state": "online", 00:14:37.069 "raid_level": "raid1", 00:14:37.069 "superblock": true, 00:14:37.069 "num_base_bdevs": 4, 00:14:37.069 "num_base_bdevs_discovered": 4, 00:14:37.069 "num_base_bdevs_operational": 4, 00:14:37.069 "process": { 00:14:37.069 "type": "rebuild", 00:14:37.069 "target": "spare", 00:14:37.069 "progress": { 00:14:37.069 "blocks": 12288, 00:14:37.069 "percent": 19 00:14:37.069 } 00:14:37.069 }, 00:14:37.069 "base_bdevs_list": [ 00:14:37.069 { 00:14:37.069 "name": "spare", 00:14:37.069 "uuid": "76dc61d7-c73d-5de8-80d1-38c7f60c30b2", 00:14:37.069 "is_configured": true, 00:14:37.069 "data_offset": 
2048, 00:14:37.069 "data_size": 63488 00:14:37.069 }, 00:14:37.069 { 00:14:37.069 "name": "BaseBdev2", 00:14:37.069 "uuid": "0d78bc49-ae2a-5e2c-8d33-f61df79489b4", 00:14:37.069 "is_configured": true, 00:14:37.069 "data_offset": 2048, 00:14:37.069 "data_size": 63488 00:14:37.069 }, 00:14:37.069 { 00:14:37.069 "name": "BaseBdev3", 00:14:37.069 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:37.069 "is_configured": true, 00:14:37.069 "data_offset": 2048, 00:14:37.069 "data_size": 63488 00:14:37.069 }, 00:14:37.069 { 00:14:37.069 "name": "BaseBdev4", 00:14:37.069 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:37.069 "is_configured": true, 00:14:37.069 "data_offset": 2048, 00:14:37.069 "data_size": 63488 00:14:37.069 } 00:14:37.069 ] 00:14:37.069 }' 00:14:37.069 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.069 [2024-12-12 05:52:44.554815] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:37.069 [2024-12-12 05:52:44.555390] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:37.329 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:37.329 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:37.329 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:37.329 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:37.329 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.329 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.329 [2024-12-12 05:52:44.654587] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: 
spare 00:14:37.329 [2024-12-12 05:52:44.657547] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:37.329 [2024-12-12 05:52:44.658769] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:37.329 [2024-12-12 05:52:44.760137] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:37.329 [2024-12-12 05:52:44.776821] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.329 [2024-12-12 05:52:44.776929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:37.329 [2024-12-12 05:52:44.776961] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:37.329 [2024-12-12 05:52:44.799880] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:14:37.329 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.329 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:37.329 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.329 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.329 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.329 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.329 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:37.329 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.330 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:14:37.330 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.330 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.330 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.330 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.330 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.330 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.589 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.589 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.589 "name": "raid_bdev1", 00:14:37.589 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:37.589 "strip_size_kb": 0, 00:14:37.589 "state": "online", 00:14:37.589 "raid_level": "raid1", 00:14:37.589 "superblock": true, 00:14:37.589 "num_base_bdevs": 4, 00:14:37.589 "num_base_bdevs_discovered": 3, 00:14:37.589 "num_base_bdevs_operational": 3, 00:14:37.589 "base_bdevs_list": [ 00:14:37.589 { 00:14:37.589 "name": null, 00:14:37.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.589 "is_configured": false, 00:14:37.589 "data_offset": 0, 00:14:37.589 "data_size": 63488 00:14:37.589 }, 00:14:37.589 { 00:14:37.589 "name": "BaseBdev2", 00:14:37.589 "uuid": "0d78bc49-ae2a-5e2c-8d33-f61df79489b4", 00:14:37.589 "is_configured": true, 00:14:37.589 "data_offset": 2048, 00:14:37.589 "data_size": 63488 00:14:37.589 }, 00:14:37.589 { 00:14:37.589 "name": "BaseBdev3", 00:14:37.589 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:37.589 "is_configured": true, 00:14:37.589 "data_offset": 2048, 00:14:37.589 "data_size": 63488 00:14:37.589 }, 00:14:37.589 { 00:14:37.589 
"name": "BaseBdev4", 00:14:37.589 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:37.589 "is_configured": true, 00:14:37.589 "data_offset": 2048, 00:14:37.589 "data_size": 63488 00:14:37.589 } 00:14:37.589 ] 00:14:37.589 }' 00:14:37.589 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.589 05:52:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.848 144.50 IOPS, 433.50 MiB/s [2024-12-12T05:52:45.370Z] 05:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.848 05:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:37.848 05:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:37.848 05:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:37.848 05:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:37.848 05:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.848 05:52:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.848 05:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.848 05:52:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:37.848 05:52:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.848 05:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:37.848 "name": "raid_bdev1", 00:14:37.848 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:37.848 "strip_size_kb": 0, 00:14:37.848 "state": "online", 00:14:37.848 "raid_level": "raid1", 00:14:37.848 "superblock": true, 00:14:37.848 "num_base_bdevs": 4, 00:14:37.848 
"num_base_bdevs_discovered": 3, 00:14:37.848 "num_base_bdevs_operational": 3, 00:14:37.848 "base_bdevs_list": [ 00:14:37.848 { 00:14:37.848 "name": null, 00:14:37.848 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.848 "is_configured": false, 00:14:37.848 "data_offset": 0, 00:14:37.848 "data_size": 63488 00:14:37.848 }, 00:14:37.848 { 00:14:37.848 "name": "BaseBdev2", 00:14:37.848 "uuid": "0d78bc49-ae2a-5e2c-8d33-f61df79489b4", 00:14:37.848 "is_configured": true, 00:14:37.848 "data_offset": 2048, 00:14:37.848 "data_size": 63488 00:14:37.848 }, 00:14:37.848 { 00:14:37.848 "name": "BaseBdev3", 00:14:37.848 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:37.848 "is_configured": true, 00:14:37.848 "data_offset": 2048, 00:14:37.848 "data_size": 63488 00:14:37.848 }, 00:14:37.848 { 00:14:37.848 "name": "BaseBdev4", 00:14:37.848 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:37.848 "is_configured": true, 00:14:37.848 "data_offset": 2048, 00:14:37.848 "data_size": 63488 00:14:37.848 } 00:14:37.848 ] 00:14:37.848 }' 00:14:37.848 05:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:37.848 05:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:37.848 05:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.107 05:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.107 05:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:38.107 05:52:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.107 05:52:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:38.107 [2024-12-12 05:52:45.396604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:38.107 05:52:45 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.107 05:52:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:38.107 [2024-12-12 05:52:45.454183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:14:38.107 [2024-12-12 05:52:45.456120] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:38.107 [2024-12-12 05:52:45.558615] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:38.107 [2024-12-12 05:52:45.559126] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:38.367 [2024-12-12 05:52:45.767323] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:38.367 [2024-12-12 05:52:45.768137] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:38.626 [2024-12-12 05:52:46.112453] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:38.886 148.33 IOPS, 445.00 MiB/s [2024-12-12T05:52:46.408Z] [2024-12-12 05:52:46.335821] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:39.145 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.145 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.145 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.145 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.145 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.145 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.145 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.145 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.145 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.145 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.145 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.145 "name": "raid_bdev1", 00:14:39.145 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:39.145 "strip_size_kb": 0, 00:14:39.145 "state": "online", 00:14:39.145 "raid_level": "raid1", 00:14:39.145 "superblock": true, 00:14:39.145 "num_base_bdevs": 4, 00:14:39.145 "num_base_bdevs_discovered": 4, 00:14:39.145 "num_base_bdevs_operational": 4, 00:14:39.145 "process": { 00:14:39.145 "type": "rebuild", 00:14:39.145 "target": "spare", 00:14:39.145 "progress": { 00:14:39.145 "blocks": 10240, 00:14:39.145 "percent": 16 00:14:39.145 } 00:14:39.145 }, 00:14:39.145 "base_bdevs_list": [ 00:14:39.145 { 00:14:39.145 "name": "spare", 00:14:39.145 "uuid": "76dc61d7-c73d-5de8-80d1-38c7f60c30b2", 00:14:39.145 "is_configured": true, 00:14:39.145 "data_offset": 2048, 00:14:39.145 "data_size": 63488 00:14:39.145 }, 00:14:39.145 { 00:14:39.145 "name": "BaseBdev2", 00:14:39.145 "uuid": "0d78bc49-ae2a-5e2c-8d33-f61df79489b4", 00:14:39.145 "is_configured": true, 00:14:39.145 "data_offset": 2048, 00:14:39.145 "data_size": 63488 00:14:39.145 }, 00:14:39.145 { 00:14:39.145 "name": "BaseBdev3", 00:14:39.145 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:39.145 "is_configured": true, 00:14:39.145 "data_offset": 2048, 00:14:39.145 "data_size": 63488 00:14:39.145 }, 
00:14:39.145 { 00:14:39.145 "name": "BaseBdev4", 00:14:39.145 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:39.145 "is_configured": true, 00:14:39.145 "data_offset": 2048, 00:14:39.145 "data_size": 63488 00:14:39.145 } 00:14:39.145 ] 00:14:39.145 }' 00:14:39.145 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.145 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.145 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.146 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.146 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:39.146 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:39.146 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:39.146 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:39.146 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:39.146 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:39.146 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:39.146 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.146 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.146 [2024-12-12 05:52:46.584653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:39.406 [2024-12-12 05:52:46.668489] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:39.406 
[2024-12-12 05:52:46.875360] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:14:39.406 [2024-12-12 05:52:46.875445] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:14:39.406 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.406 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:39.406 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:39.406 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.406 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.406 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.406 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.406 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.406 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.406 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.406 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.406 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.406 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.666 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.666 "name": "raid_bdev1", 00:14:39.666 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:39.666 "strip_size_kb": 0, 00:14:39.666 "state": "online", 
00:14:39.666 "raid_level": "raid1", 00:14:39.666 "superblock": true, 00:14:39.666 "num_base_bdevs": 4, 00:14:39.666 "num_base_bdevs_discovered": 3, 00:14:39.666 "num_base_bdevs_operational": 3, 00:14:39.666 "process": { 00:14:39.666 "type": "rebuild", 00:14:39.666 "target": "spare", 00:14:39.666 "progress": { 00:14:39.666 "blocks": 14336, 00:14:39.666 "percent": 22 00:14:39.666 } 00:14:39.666 }, 00:14:39.666 "base_bdevs_list": [ 00:14:39.666 { 00:14:39.666 "name": "spare", 00:14:39.666 "uuid": "76dc61d7-c73d-5de8-80d1-38c7f60c30b2", 00:14:39.666 "is_configured": true, 00:14:39.666 "data_offset": 2048, 00:14:39.666 "data_size": 63488 00:14:39.666 }, 00:14:39.666 { 00:14:39.666 "name": null, 00:14:39.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.666 "is_configured": false, 00:14:39.666 "data_offset": 0, 00:14:39.666 "data_size": 63488 00:14:39.666 }, 00:14:39.666 { 00:14:39.666 "name": "BaseBdev3", 00:14:39.666 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:39.666 "is_configured": true, 00:14:39.666 "data_offset": 2048, 00:14:39.666 "data_size": 63488 00:14:39.666 }, 00:14:39.666 { 00:14:39.666 "name": "BaseBdev4", 00:14:39.666 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:39.666 "is_configured": true, 00:14:39.666 "data_offset": 2048, 00:14:39.666 "data_size": 63488 00:14:39.666 } 00:14:39.666 ] 00:14:39.666 }' 00:14:39.666 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.666 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.666 05:52:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.666 05:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.666 05:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=481 00:14:39.666 05:52:47 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:39.666 05:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:39.667 05:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:39.667 05:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:39.667 05:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:39.667 05:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:39.667 05:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:39.667 05:52:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.667 05:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:39.667 05:52:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:39.667 05:52:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.667 05:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:39.667 "name": "raid_bdev1", 00:14:39.667 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:39.667 "strip_size_kb": 0, 00:14:39.667 "state": "online", 00:14:39.667 "raid_level": "raid1", 00:14:39.667 "superblock": true, 00:14:39.667 "num_base_bdevs": 4, 00:14:39.667 "num_base_bdevs_discovered": 3, 00:14:39.667 "num_base_bdevs_operational": 3, 00:14:39.667 "process": { 00:14:39.667 "type": "rebuild", 00:14:39.667 "target": "spare", 00:14:39.667 "progress": { 00:14:39.667 "blocks": 16384, 00:14:39.667 "percent": 25 00:14:39.667 } 00:14:39.667 }, 00:14:39.667 "base_bdevs_list": [ 00:14:39.667 { 00:14:39.667 "name": "spare", 00:14:39.667 "uuid": "76dc61d7-c73d-5de8-80d1-38c7f60c30b2", 
00:14:39.667 "is_configured": true, 00:14:39.667 "data_offset": 2048, 00:14:39.667 "data_size": 63488 00:14:39.667 }, 00:14:39.667 { 00:14:39.667 "name": null, 00:14:39.667 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.667 "is_configured": false, 00:14:39.667 "data_offset": 0, 00:14:39.667 "data_size": 63488 00:14:39.667 }, 00:14:39.667 { 00:14:39.667 "name": "BaseBdev3", 00:14:39.667 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:39.667 "is_configured": true, 00:14:39.667 "data_offset": 2048, 00:14:39.667 "data_size": 63488 00:14:39.667 }, 00:14:39.667 { 00:14:39.667 "name": "BaseBdev4", 00:14:39.667 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:39.667 "is_configured": true, 00:14:39.667 "data_offset": 2048, 00:14:39.667 "data_size": 63488 00:14:39.667 } 00:14:39.667 ] 00:14:39.667 }' 00:14:39.667 05:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:39.667 133.25 IOPS, 399.75 MiB/s [2024-12-12T05:52:47.189Z] 05:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:39.667 05:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:39.927 05:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:39.927 05:52:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:40.186 [2024-12-12 05:52:47.543325] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:40.445 [2024-12-12 05:52:47.758368] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:40.704 [2024-12-12 05:52:47.986264] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:40.704 116.60 IOPS, 349.80 MiB/s [2024-12-12T05:52:48.226Z] 
05:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:40.704 05:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:40.704 05:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:40.704 05:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:40.704 05:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:40.704 05:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:40.704 05:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.704 05:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.704 05:52:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.704 05:52:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:40.964 05:52:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.964 05:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:40.964 "name": "raid_bdev1", 00:14:40.964 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:40.964 "strip_size_kb": 0, 00:14:40.964 "state": "online", 00:14:40.964 "raid_level": "raid1", 00:14:40.964 "superblock": true, 00:14:40.964 "num_base_bdevs": 4, 00:14:40.964 "num_base_bdevs_discovered": 3, 00:14:40.964 "num_base_bdevs_operational": 3, 00:14:40.964 "process": { 00:14:40.964 "type": "rebuild", 00:14:40.964 "target": "spare", 00:14:40.964 "progress": { 00:14:40.964 "blocks": 34816, 00:14:40.964 "percent": 54 00:14:40.964 } 00:14:40.964 }, 00:14:40.964 "base_bdevs_list": [ 00:14:40.964 { 00:14:40.964 "name": "spare", 00:14:40.964 
"uuid": "76dc61d7-c73d-5de8-80d1-38c7f60c30b2", 00:14:40.964 "is_configured": true, 00:14:40.964 "data_offset": 2048, 00:14:40.964 "data_size": 63488 00:14:40.964 }, 00:14:40.964 { 00:14:40.964 "name": null, 00:14:40.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.964 "is_configured": false, 00:14:40.964 "data_offset": 0, 00:14:40.964 "data_size": 63488 00:14:40.964 }, 00:14:40.964 { 00:14:40.964 "name": "BaseBdev3", 00:14:40.964 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:40.964 "is_configured": true, 00:14:40.964 "data_offset": 2048, 00:14:40.964 "data_size": 63488 00:14:40.964 }, 00:14:40.964 { 00:14:40.964 "name": "BaseBdev4", 00:14:40.965 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:40.965 "is_configured": true, 00:14:40.965 "data_offset": 2048, 00:14:40.965 "data_size": 63488 00:14:40.965 } 00:14:40.965 ] 00:14:40.965 }' 00:14:40.965 05:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:40.965 05:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:40.965 05:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:40.965 05:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:40.965 05:52:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:40.965 [2024-12-12 05:52:48.447900] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:41.224 [2024-12-12 05:52:48.667739] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:14:41.484 [2024-12-12 05:52:48.781562] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:42.003 103.50 IOPS, 310.50 MiB/s [2024-12-12T05:52:49.525Z] 05:52:49 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:42.003 05:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.003 05:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.003 05:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.003 05:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.003 05:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.003 05:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.003 05:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.003 05:52:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.003 05:52:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:42.003 05:52:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.003 05:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.003 "name": "raid_bdev1", 00:14:42.003 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:42.003 "strip_size_kb": 0, 00:14:42.003 "state": "online", 00:14:42.003 "raid_level": "raid1", 00:14:42.003 "superblock": true, 00:14:42.003 "num_base_bdevs": 4, 00:14:42.003 "num_base_bdevs_discovered": 3, 00:14:42.003 "num_base_bdevs_operational": 3, 00:14:42.003 "process": { 00:14:42.003 "type": "rebuild", 00:14:42.003 "target": "spare", 00:14:42.003 "progress": { 00:14:42.003 "blocks": 55296, 00:14:42.003 "percent": 87 00:14:42.003 } 00:14:42.003 }, 00:14:42.003 "base_bdevs_list": [ 00:14:42.003 { 00:14:42.003 "name": "spare", 00:14:42.003 "uuid": 
"76dc61d7-c73d-5de8-80d1-38c7f60c30b2", 00:14:42.003 "is_configured": true, 00:14:42.003 "data_offset": 2048, 00:14:42.003 "data_size": 63488 00:14:42.003 }, 00:14:42.003 { 00:14:42.003 "name": null, 00:14:42.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.003 "is_configured": false, 00:14:42.003 "data_offset": 0, 00:14:42.003 "data_size": 63488 00:14:42.003 }, 00:14:42.003 { 00:14:42.003 "name": "BaseBdev3", 00:14:42.003 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:42.003 "is_configured": true, 00:14:42.003 "data_offset": 2048, 00:14:42.003 "data_size": 63488 00:14:42.003 }, 00:14:42.003 { 00:14:42.003 "name": "BaseBdev4", 00:14:42.003 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:42.003 "is_configured": true, 00:14:42.003 "data_offset": 2048, 00:14:42.003 "data_size": 63488 00:14:42.003 } 00:14:42.003 ] 00:14:42.003 }' 00:14:42.003 05:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.003 05:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.003 05:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.003 05:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.003 05:52:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:42.263 [2024-12-12 05:52:49.771355] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:42.523 [2024-12-12 05:52:49.871133] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:42.523 [2024-12-12 05:52:49.873552] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.043 93.86 IOPS, 281.57 MiB/s [2024-12-12T05:52:50.565Z] 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:43.043 05:52:50 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:43.043 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.043 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:43.043 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:43.043 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.043 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.043 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.043 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.043 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.043 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.043 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.043 "name": "raid_bdev1", 00:14:43.043 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:43.043 "strip_size_kb": 0, 00:14:43.043 "state": "online", 00:14:43.043 "raid_level": "raid1", 00:14:43.043 "superblock": true, 00:14:43.043 "num_base_bdevs": 4, 00:14:43.043 "num_base_bdevs_discovered": 3, 00:14:43.043 "num_base_bdevs_operational": 3, 00:14:43.043 "base_bdevs_list": [ 00:14:43.043 { 00:14:43.043 "name": "spare", 00:14:43.043 "uuid": "76dc61d7-c73d-5de8-80d1-38c7f60c30b2", 00:14:43.043 "is_configured": true, 00:14:43.043 "data_offset": 2048, 00:14:43.043 "data_size": 63488 00:14:43.043 }, 00:14:43.043 { 00:14:43.043 "name": null, 00:14:43.043 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.043 "is_configured": false, 00:14:43.043 
"data_offset": 0, 00:14:43.043 "data_size": 63488 00:14:43.043 }, 00:14:43.043 { 00:14:43.043 "name": "BaseBdev3", 00:14:43.043 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:43.043 "is_configured": true, 00:14:43.043 "data_offset": 2048, 00:14:43.043 "data_size": 63488 00:14:43.043 }, 00:14:43.043 { 00:14:43.043 "name": "BaseBdev4", 00:14:43.043 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:43.043 "is_configured": true, 00:14:43.043 "data_offset": 2048, 00:14:43.043 "data_size": 63488 00:14:43.043 } 00:14:43.043 ] 00:14:43.043 }' 00:14:43.043 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.303 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:43.303 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:43.303 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:43.303 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:43.303 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:43.303 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:43.303 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:43.303 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:43.303 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:43.303 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.303 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.303 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.303 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.303 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.303 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:43.303 "name": "raid_bdev1", 00:14:43.303 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:43.303 "strip_size_kb": 0, 00:14:43.303 "state": "online", 00:14:43.303 "raid_level": "raid1", 00:14:43.303 "superblock": true, 00:14:43.303 "num_base_bdevs": 4, 00:14:43.303 "num_base_bdevs_discovered": 3, 00:14:43.303 "num_base_bdevs_operational": 3, 00:14:43.303 "base_bdevs_list": [ 00:14:43.303 { 00:14:43.303 "name": "spare", 00:14:43.303 "uuid": "76dc61d7-c73d-5de8-80d1-38c7f60c30b2", 00:14:43.303 "is_configured": true, 00:14:43.303 "data_offset": 2048, 00:14:43.303 "data_size": 63488 00:14:43.303 }, 00:14:43.303 { 00:14:43.303 "name": null, 00:14:43.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.303 "is_configured": false, 00:14:43.303 "data_offset": 0, 00:14:43.303 "data_size": 63488 00:14:43.303 }, 00:14:43.303 { 00:14:43.303 "name": "BaseBdev3", 00:14:43.303 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:43.303 "is_configured": true, 00:14:43.303 "data_offset": 2048, 00:14:43.303 "data_size": 63488 00:14:43.303 }, 00:14:43.303 { 00:14:43.303 "name": "BaseBdev4", 00:14:43.303 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:43.303 "is_configured": true, 00:14:43.303 "data_offset": 2048, 00:14:43.303 "data_size": 63488 00:14:43.303 } 00:14:43.303 ] 00:14:43.304 }' 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:43.304 "name": "raid_bdev1", 00:14:43.304 "uuid": 
"95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:43.304 "strip_size_kb": 0, 00:14:43.304 "state": "online", 00:14:43.304 "raid_level": "raid1", 00:14:43.304 "superblock": true, 00:14:43.304 "num_base_bdevs": 4, 00:14:43.304 "num_base_bdevs_discovered": 3, 00:14:43.304 "num_base_bdevs_operational": 3, 00:14:43.304 "base_bdevs_list": [ 00:14:43.304 { 00:14:43.304 "name": "spare", 00:14:43.304 "uuid": "76dc61d7-c73d-5de8-80d1-38c7f60c30b2", 00:14:43.304 "is_configured": true, 00:14:43.304 "data_offset": 2048, 00:14:43.304 "data_size": 63488 00:14:43.304 }, 00:14:43.304 { 00:14:43.304 "name": null, 00:14:43.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:43.304 "is_configured": false, 00:14:43.304 "data_offset": 0, 00:14:43.304 "data_size": 63488 00:14:43.304 }, 00:14:43.304 { 00:14:43.304 "name": "BaseBdev3", 00:14:43.304 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:43.304 "is_configured": true, 00:14:43.304 "data_offset": 2048, 00:14:43.304 "data_size": 63488 00:14:43.304 }, 00:14:43.304 { 00:14:43.304 "name": "BaseBdev4", 00:14:43.304 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:43.304 "is_configured": true, 00:14:43.304 "data_offset": 2048, 00:14:43.304 "data_size": 63488 00:14:43.304 } 00:14:43.304 ] 00:14:43.304 }' 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:43.304 05:52:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.874 86.50 IOPS, 259.50 MiB/s [2024-12-12T05:52:51.396Z] 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:43.874 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.874 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.874 [2024-12-12 05:52:51.135464] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:43.874 [2024-12-12 
05:52:51.135493] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:43.874 00:14:43.874 Latency(us) 00:14:43.874 [2024-12-12T05:52:51.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.874 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:43.874 raid_bdev1 : 8.06 86.11 258.34 0.00 0.00 16276.27 296.92 115389.15 00:14:43.874 [2024-12-12T05:52:51.396Z] =================================================================================================================== 00:14:43.874 [2024-12-12T05:52:51.396Z] Total : 86.11 258.34 0.00 0.00 16276.27 296.92 115389.15 00:14:43.874 [2024-12-12 05:52:51.183723] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:43.874 [2024-12-12 05:52:51.183845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:43.874 [2024-12-12 05:52:51.183984] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:43.874 [2024-12-12 05:52:51.184033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:43.874 { 00:14:43.874 "results": [ 00:14:43.874 { 00:14:43.874 "job": "raid_bdev1", 00:14:43.874 "core_mask": "0x1", 00:14:43.874 "workload": "randrw", 00:14:43.874 "percentage": 50, 00:14:43.874 "status": "finished", 00:14:43.874 "queue_depth": 2, 00:14:43.874 "io_size": 3145728, 00:14:43.874 "runtime": 8.05901, 00:14:43.874 "iops": 86.11479573793804, 00:14:43.874 "mibps": 258.3443872138141, 00:14:43.874 "io_failed": 0, 00:14:43.874 "io_timeout": 0, 00:14:43.874 "avg_latency_us": 16276.268854687087, 00:14:43.874 "min_latency_us": 296.91528384279474, 00:14:43.874 "max_latency_us": 115389.14934497817 00:14:43.874 } 00:14:43.874 ], 00:14:43.874 "core_count": 1 00:14:43.874 } 00:14:43.874 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.874 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:43.874 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.874 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:43.874 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:43.874 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.874 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:43.874 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:43.874 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:43.874 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:43.874 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:43.874 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:43.874 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:43.874 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:43.874 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:43.874 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:43.874 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:43.874 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:43.874 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:44.134 /dev/nbd0 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:44.134 1+0 records in 00:14:44.134 1+0 records out 00:14:44.134 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000348249 s, 11.8 MB/s 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:44.134 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 
00:14:44.134 /dev/nbd1 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:44.394 1+0 records in 00:14:44.394 1+0 records out 00:14:44.394 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382398 s, 10.7 MB/s 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@893 -- # return 0 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.394 05:52:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:44.654 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:44.654 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:44.654 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:44.654 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.654 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.654 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:44.654 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:44.654 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@45 -- # return 0 00:14:44.654 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:44.654 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:14:44.654 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:14:44.654 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:44.654 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:14:44.654 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:44.654 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:44.654 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:44.654 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:44.654 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:44.654 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:44.654 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:14:44.914 /dev/nbd1 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:44.914 05:52:52 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:44.914 1+0 records in 00:14:44.914 1+0 records out 00:14:44.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000551919 s, 7.4 MB/s 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # 
local rpc_server=/var/tmp/spdk.sock 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.914 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:45.174 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:45.174 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:45.174 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:45.174 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:45.174 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:45.174 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:45.174 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:45.174 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:45.174 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:45.174 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:45.175 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:45.175 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:45.175 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # 
local i 00:14:45.175 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:45.175 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- 
# set +x 00:14:45.435 [2024-12-12 05:52:52.827146] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:45.435 [2024-12-12 05:52:52.827248] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.435 [2024-12-12 05:52:52.827289] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:14:45.435 [2024-12-12 05:52:52.827299] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.435 [2024-12-12 05:52:52.829495] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.435 [2024-12-12 05:52:52.829579] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:45.435 [2024-12-12 05:52:52.829685] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:45.435 [2024-12-12 05:52:52.829734] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:45.435 [2024-12-12 05:52:52.829875] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:45.435 [2024-12-12 05:52:52.829958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:45.435 spare 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.435 [2024-12-12 05:52:52.929844] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:45.435 [2024-12-12 05:52:52.929866] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:45.435 [2024-12-12 05:52:52.930127] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:14:45.435 [2024-12-12 05:52:52.930275] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:45.435 [2024-12-12 05:52:52.930287] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:45.435 [2024-12-12 05:52:52.930457] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.435 05:52:52 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.435 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.695 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.695 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.695 "name": "raid_bdev1", 00:14:45.695 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:45.695 "strip_size_kb": 0, 00:14:45.695 "state": "online", 00:14:45.695 "raid_level": "raid1", 00:14:45.695 "superblock": true, 00:14:45.695 "num_base_bdevs": 4, 00:14:45.695 "num_base_bdevs_discovered": 3, 00:14:45.695 "num_base_bdevs_operational": 3, 00:14:45.695 "base_bdevs_list": [ 00:14:45.695 { 00:14:45.695 "name": "spare", 00:14:45.695 "uuid": "76dc61d7-c73d-5de8-80d1-38c7f60c30b2", 00:14:45.695 "is_configured": true, 00:14:45.695 "data_offset": 2048, 00:14:45.695 "data_size": 63488 00:14:45.695 }, 00:14:45.695 { 00:14:45.695 "name": null, 00:14:45.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.695 "is_configured": false, 00:14:45.695 "data_offset": 2048, 00:14:45.695 "data_size": 63488 00:14:45.695 }, 00:14:45.695 { 00:14:45.695 "name": "BaseBdev3", 00:14:45.695 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:45.695 "is_configured": true, 00:14:45.695 "data_offset": 2048, 00:14:45.695 "data_size": 63488 00:14:45.695 }, 00:14:45.695 { 00:14:45.695 "name": "BaseBdev4", 00:14:45.695 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:45.695 "is_configured": true, 00:14:45.695 "data_offset": 2048, 00:14:45.695 "data_size": 63488 00:14:45.695 } 00:14:45.695 ] 00:14:45.695 }' 00:14:45.695 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.695 05:52:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.955 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:45.955 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.955 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:45.955 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:45.955 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.955 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.955 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.955 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.955 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:45.955 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.955 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.955 "name": "raid_bdev1", 00:14:45.955 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:45.955 "strip_size_kb": 0, 00:14:45.955 "state": "online", 00:14:45.955 "raid_level": "raid1", 00:14:45.955 "superblock": true, 00:14:45.955 "num_base_bdevs": 4, 00:14:45.955 "num_base_bdevs_discovered": 3, 00:14:45.955 "num_base_bdevs_operational": 3, 00:14:45.955 "base_bdevs_list": [ 00:14:45.955 { 00:14:45.955 "name": "spare", 00:14:45.955 "uuid": "76dc61d7-c73d-5de8-80d1-38c7f60c30b2", 00:14:45.955 "is_configured": true, 00:14:45.955 "data_offset": 2048, 00:14:45.955 "data_size": 63488 00:14:45.955 }, 00:14:45.955 { 00:14:45.955 "name": null, 00:14:45.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.955 "is_configured": false, 00:14:45.955 "data_offset": 2048, 00:14:45.955 "data_size": 63488 
00:14:45.955 }, 00:14:45.955 { 00:14:45.955 "name": "BaseBdev3", 00:14:45.956 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:45.956 "is_configured": true, 00:14:45.956 "data_offset": 2048, 00:14:45.956 "data_size": 63488 00:14:45.956 }, 00:14:45.956 { 00:14:45.956 "name": "BaseBdev4", 00:14:45.956 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:45.956 "is_configured": true, 00:14:45.956 "data_offset": 2048, 00:14:45.956 "data_size": 63488 00:14:45.956 } 00:14:45.956 ] 00:14:45.956 }' 00:14:45.956 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.215 [2024-12-12 
05:52:53.570245] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.215 
"name": "raid_bdev1", 00:14:46.215 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:46.215 "strip_size_kb": 0, 00:14:46.215 "state": "online", 00:14:46.215 "raid_level": "raid1", 00:14:46.215 "superblock": true, 00:14:46.215 "num_base_bdevs": 4, 00:14:46.215 "num_base_bdevs_discovered": 2, 00:14:46.215 "num_base_bdevs_operational": 2, 00:14:46.215 "base_bdevs_list": [ 00:14:46.215 { 00:14:46.215 "name": null, 00:14:46.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.215 "is_configured": false, 00:14:46.215 "data_offset": 0, 00:14:46.215 "data_size": 63488 00:14:46.215 }, 00:14:46.215 { 00:14:46.215 "name": null, 00:14:46.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.215 "is_configured": false, 00:14:46.215 "data_offset": 2048, 00:14:46.215 "data_size": 63488 00:14:46.215 }, 00:14:46.215 { 00:14:46.215 "name": "BaseBdev3", 00:14:46.215 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:46.215 "is_configured": true, 00:14:46.215 "data_offset": 2048, 00:14:46.215 "data_size": 63488 00:14:46.215 }, 00:14:46.215 { 00:14:46.215 "name": "BaseBdev4", 00:14:46.215 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:46.215 "is_configured": true, 00:14:46.215 "data_offset": 2048, 00:14:46.215 "data_size": 63488 00:14:46.215 } 00:14:46.215 ] 00:14:46.215 }' 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.215 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.475 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:46.475 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.475 05:52:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:46.475 [2024-12-12 05:52:53.993645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:46.475 [2024-12-12 
05:52:53.993901] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:46.475 [2024-12-12 05:52:53.993961] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:46.475 [2024-12-12 05:52:53.994035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:46.742 [2024-12-12 05:52:54.008745] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:14:46.742 05:52:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.742 05:52:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:46.743 [2024-12-12 05:52:54.010633] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:47.704 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.704 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.704 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.705 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.705 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.705 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.705 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.705 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.705 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.705 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.705 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.705 "name": "raid_bdev1", 00:14:47.705 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:47.705 "strip_size_kb": 0, 00:14:47.705 "state": "online", 00:14:47.705 "raid_level": "raid1", 00:14:47.705 "superblock": true, 00:14:47.705 "num_base_bdevs": 4, 00:14:47.705 "num_base_bdevs_discovered": 3, 00:14:47.705 "num_base_bdevs_operational": 3, 00:14:47.705 "process": { 00:14:47.705 "type": "rebuild", 00:14:47.705 "target": "spare", 00:14:47.705 "progress": { 00:14:47.705 "blocks": 20480, 00:14:47.705 "percent": 32 00:14:47.705 } 00:14:47.705 }, 00:14:47.705 "base_bdevs_list": [ 00:14:47.705 { 00:14:47.705 "name": "spare", 00:14:47.705 "uuid": "76dc61d7-c73d-5de8-80d1-38c7f60c30b2", 00:14:47.705 "is_configured": true, 00:14:47.705 "data_offset": 2048, 00:14:47.705 "data_size": 63488 00:14:47.705 }, 00:14:47.705 { 00:14:47.705 "name": null, 00:14:47.705 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.705 "is_configured": false, 00:14:47.705 "data_offset": 2048, 00:14:47.705 "data_size": 63488 00:14:47.705 }, 00:14:47.705 { 00:14:47.705 "name": "BaseBdev3", 00:14:47.705 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:47.705 "is_configured": true, 00:14:47.705 "data_offset": 2048, 00:14:47.705 "data_size": 63488 00:14:47.705 }, 00:14:47.705 { 00:14:47.705 "name": "BaseBdev4", 00:14:47.705 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:47.705 "is_configured": true, 00:14:47.705 "data_offset": 2048, 00:14:47.705 "data_size": 63488 00:14:47.705 } 00:14:47.705 ] 00:14:47.705 }' 00:14:47.705 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.705 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.705 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:14:47.705 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.705 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:47.705 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.705 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.705 [2024-12-12 05:52:55.170553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:47.705 [2024-12-12 05:52:55.215262] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:47.705 [2024-12-12 05:52:55.215318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:47.705 [2024-12-12 05:52:55.215336] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:47.705 [2024-12-12 05:52:55.215342] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:47.966 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.966 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:47.966 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:47.966 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:47.966 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:47.966 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:47.966 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:47.966 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- 
# local raid_bdev_info 00:14:47.966 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:47.966 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:47.966 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:47.966 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.966 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.966 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.966 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:47.966 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.966 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:47.966 "name": "raid_bdev1", 00:14:47.966 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:47.966 "strip_size_kb": 0, 00:14:47.966 "state": "online", 00:14:47.966 "raid_level": "raid1", 00:14:47.966 "superblock": true, 00:14:47.966 "num_base_bdevs": 4, 00:14:47.966 "num_base_bdevs_discovered": 2, 00:14:47.966 "num_base_bdevs_operational": 2, 00:14:47.966 "base_bdevs_list": [ 00:14:47.966 { 00:14:47.966 "name": null, 00:14:47.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.966 "is_configured": false, 00:14:47.966 "data_offset": 0, 00:14:47.966 "data_size": 63488 00:14:47.966 }, 00:14:47.966 { 00:14:47.966 "name": null, 00:14:47.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.966 "is_configured": false, 00:14:47.966 "data_offset": 2048, 00:14:47.966 "data_size": 63488 00:14:47.966 }, 00:14:47.966 { 00:14:47.966 "name": "BaseBdev3", 00:14:47.966 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:47.966 "is_configured": true, 
00:14:47.966 "data_offset": 2048, 00:14:47.966 "data_size": 63488 00:14:47.966 }, 00:14:47.966 { 00:14:47.966 "name": "BaseBdev4", 00:14:47.966 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:47.966 "is_configured": true, 00:14:47.966 "data_offset": 2048, 00:14:47.966 "data_size": 63488 00:14:47.966 } 00:14:47.966 ] 00:14:47.966 }' 00:14:47.966 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:47.966 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.226 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:48.226 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.226 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:48.226 [2024-12-12 05:52:55.650495] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:48.226 [2024-12-12 05:52:55.650619] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:48.226 [2024-12-12 05:52:55.650663] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:14:48.226 [2024-12-12 05:52:55.650691] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:48.226 [2024-12-12 05:52:55.651206] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:48.226 [2024-12-12 05:52:55.651265] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:48.226 [2024-12-12 05:52:55.651394] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:48.226 [2024-12-12 05:52:55.651436] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:48.226 [2024-12-12 05:52:55.651485] 
bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:48.226 [2024-12-12 05:52:55.651562] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:48.226 [2024-12-12 05:52:55.665801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:14:48.226 spare 00:14:48.226 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.226 05:52:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:48.226 [2024-12-12 05:52:55.667690] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:49.164 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.164 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.164 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.164 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.164 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.164 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.164 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.164 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.164 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.425 "name": "raid_bdev1", 00:14:49.425 
"uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:49.425 "strip_size_kb": 0, 00:14:49.425 "state": "online", 00:14:49.425 "raid_level": "raid1", 00:14:49.425 "superblock": true, 00:14:49.425 "num_base_bdevs": 4, 00:14:49.425 "num_base_bdevs_discovered": 3, 00:14:49.425 "num_base_bdevs_operational": 3, 00:14:49.425 "process": { 00:14:49.425 "type": "rebuild", 00:14:49.425 "target": "spare", 00:14:49.425 "progress": { 00:14:49.425 "blocks": 20480, 00:14:49.425 "percent": 32 00:14:49.425 } 00:14:49.425 }, 00:14:49.425 "base_bdevs_list": [ 00:14:49.425 { 00:14:49.425 "name": "spare", 00:14:49.425 "uuid": "76dc61d7-c73d-5de8-80d1-38c7f60c30b2", 00:14:49.425 "is_configured": true, 00:14:49.425 "data_offset": 2048, 00:14:49.425 "data_size": 63488 00:14:49.425 }, 00:14:49.425 { 00:14:49.425 "name": null, 00:14:49.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.425 "is_configured": false, 00:14:49.425 "data_offset": 2048, 00:14:49.425 "data_size": 63488 00:14:49.425 }, 00:14:49.425 { 00:14:49.425 "name": "BaseBdev3", 00:14:49.425 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:49.425 "is_configured": true, 00:14:49.425 "data_offset": 2048, 00:14:49.425 "data_size": 63488 00:14:49.425 }, 00:14:49.425 { 00:14:49.425 "name": "BaseBdev4", 00:14:49.425 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:49.425 "is_configured": true, 00:14:49.425 "data_offset": 2048, 00:14:49.425 "data_size": 63488 00:14:49.425 } 00:14:49.425 ] 00:14:49.425 }' 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.425 05:52:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.425 [2024-12-12 05:52:56.831527] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:49.425 [2024-12-12 05:52:56.872346] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:49.425 [2024-12-12 05:52:56.872447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.425 [2024-12-12 05:52:56.872464] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:49.425 [2024-12-12 05:52:56.872475] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:49.425 05:52:56 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.425 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.685 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:49.685 "name": "raid_bdev1", 00:14:49.685 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:49.685 "strip_size_kb": 0, 00:14:49.685 "state": "online", 00:14:49.685 "raid_level": "raid1", 00:14:49.685 "superblock": true, 00:14:49.685 "num_base_bdevs": 4, 00:14:49.685 "num_base_bdevs_discovered": 2, 00:14:49.685 "num_base_bdevs_operational": 2, 00:14:49.685 "base_bdevs_list": [ 00:14:49.685 { 00:14:49.685 "name": null, 00:14:49.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.685 "is_configured": false, 00:14:49.685 "data_offset": 0, 00:14:49.685 "data_size": 63488 00:14:49.685 }, 00:14:49.685 { 00:14:49.685 "name": null, 00:14:49.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.685 "is_configured": false, 00:14:49.685 "data_offset": 2048, 00:14:49.685 "data_size": 63488 00:14:49.685 }, 00:14:49.685 { 00:14:49.685 "name": "BaseBdev3", 00:14:49.685 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:49.685 "is_configured": true, 00:14:49.685 "data_offset": 2048, 00:14:49.685 "data_size": 63488 00:14:49.685 }, 00:14:49.685 { 00:14:49.685 "name": "BaseBdev4", 00:14:49.685 "uuid": 
"8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:49.685 "is_configured": true, 00:14:49.685 "data_offset": 2048, 00:14:49.685 "data_size": 63488 00:14:49.685 } 00:14:49.685 ] 00:14:49.685 }' 00:14:49.685 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:49.685 05:52:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.948 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:49.948 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.948 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:49.948 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:49.948 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.948 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.948 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.948 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.948 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:49.948 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.948 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:49.948 "name": "raid_bdev1", 00:14:49.948 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:49.948 "strip_size_kb": 0, 00:14:49.948 "state": "online", 00:14:49.948 "raid_level": "raid1", 00:14:49.948 "superblock": true, 00:14:49.948 "num_base_bdevs": 4, 00:14:49.948 "num_base_bdevs_discovered": 2, 00:14:49.948 "num_base_bdevs_operational": 2, 00:14:49.948 
"base_bdevs_list": [ 00:14:49.948 { 00:14:49.948 "name": null, 00:14:49.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.948 "is_configured": false, 00:14:49.948 "data_offset": 0, 00:14:49.948 "data_size": 63488 00:14:49.948 }, 00:14:49.948 { 00:14:49.948 "name": null, 00:14:49.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.948 "is_configured": false, 00:14:49.948 "data_offset": 2048, 00:14:49.948 "data_size": 63488 00:14:49.948 }, 00:14:49.948 { 00:14:49.948 "name": "BaseBdev3", 00:14:49.948 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:49.948 "is_configured": true, 00:14:49.948 "data_offset": 2048, 00:14:49.948 "data_size": 63488 00:14:49.948 }, 00:14:49.948 { 00:14:49.948 "name": "BaseBdev4", 00:14:49.948 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:49.948 "is_configured": true, 00:14:49.948 "data_offset": 2048, 00:14:49.948 "data_size": 63488 00:14:49.948 } 00:14:49.948 ] 00:14:49.948 }' 00:14:49.948 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:49.948 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:49.948 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.208 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:50.208 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:50.208 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.208 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.208 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.208 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 
00:14:50.208 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.208 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.208 [2024-12-12 05:52:57.519117] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:50.208 [2024-12-12 05:52:57.519177] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:50.208 [2024-12-12 05:52:57.519196] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:14:50.208 [2024-12-12 05:52:57.519206] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:50.208 [2024-12-12 05:52:57.519644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:50.208 [2024-12-12 05:52:57.519669] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:50.208 [2024-12-12 05:52:57.519759] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:50.208 [2024-12-12 05:52:57.519781] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:50.208 [2024-12-12 05:52:57.519789] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:50.208 [2024-12-12 05:52:57.519803] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:50.208 BaseBdev1 00:14:50.208 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.208 05:52:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:51.149 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:51.149 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:14:51.149 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.149 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.149 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.149 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:51.149 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.149 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.149 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.149 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.149 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.149 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.149 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.149 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.149 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.149 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.149 "name": "raid_bdev1", 00:14:51.149 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:51.149 "strip_size_kb": 0, 00:14:51.149 "state": "online", 00:14:51.149 "raid_level": "raid1", 00:14:51.149 "superblock": true, 00:14:51.149 "num_base_bdevs": 4, 00:14:51.149 "num_base_bdevs_discovered": 2, 00:14:51.149 "num_base_bdevs_operational": 2, 00:14:51.149 "base_bdevs_list": [ 00:14:51.149 { 00:14:51.149 
"name": null, 00:14:51.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.149 "is_configured": false, 00:14:51.149 "data_offset": 0, 00:14:51.149 "data_size": 63488 00:14:51.149 }, 00:14:51.149 { 00:14:51.149 "name": null, 00:14:51.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.149 "is_configured": false, 00:14:51.149 "data_offset": 2048, 00:14:51.149 "data_size": 63488 00:14:51.149 }, 00:14:51.149 { 00:14:51.149 "name": "BaseBdev3", 00:14:51.149 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:51.149 "is_configured": true, 00:14:51.149 "data_offset": 2048, 00:14:51.149 "data_size": 63488 00:14:51.149 }, 00:14:51.149 { 00:14:51.149 "name": "BaseBdev4", 00:14:51.149 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:51.149 "is_configured": true, 00:14:51.149 "data_offset": 2048, 00:14:51.149 "data_size": 63488 00:14:51.149 } 00:14:51.149 ] 00:14:51.149 }' 00:14:51.149 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.149 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.719 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:51.719 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:51.719 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:51.719 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:51.719 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:51.719 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.720 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.720 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.720 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.720 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.720 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:51.720 "name": "raid_bdev1", 00:14:51.720 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:51.720 "strip_size_kb": 0, 00:14:51.720 "state": "online", 00:14:51.720 "raid_level": "raid1", 00:14:51.720 "superblock": true, 00:14:51.720 "num_base_bdevs": 4, 00:14:51.720 "num_base_bdevs_discovered": 2, 00:14:51.720 "num_base_bdevs_operational": 2, 00:14:51.720 "base_bdevs_list": [ 00:14:51.720 { 00:14:51.720 "name": null, 00:14:51.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.720 "is_configured": false, 00:14:51.720 "data_offset": 0, 00:14:51.720 "data_size": 63488 00:14:51.720 }, 00:14:51.720 { 00:14:51.720 "name": null, 00:14:51.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.720 "is_configured": false, 00:14:51.720 "data_offset": 2048, 00:14:51.720 "data_size": 63488 00:14:51.720 }, 00:14:51.720 { 00:14:51.720 "name": "BaseBdev3", 00:14:51.720 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:51.720 "is_configured": true, 00:14:51.720 "data_offset": 2048, 00:14:51.720 "data_size": 63488 00:14:51.720 }, 00:14:51.720 { 00:14:51.720 "name": "BaseBdev4", 00:14:51.720 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:51.720 "is_configured": true, 00:14:51.720 "data_offset": 2048, 00:14:51.720 "data_size": 63488 00:14:51.720 } 00:14:51.720 ] 00:14:51.720 }' 00:14:51.720 05:52:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:51.720 05:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:51.720 05:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:51.720 05:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:51.720 05:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:51.720 05:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:14:51.720 05:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:51.720 05:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:51.720 05:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:51.720 05:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:51.720 05:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:51.720 05:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:51.720 05:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.720 05:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.720 [2024-12-12 05:52:59.072698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.720 [2024-12-12 05:52:59.072926] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:51.720 [2024-12-12 05:52:59.072943] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:51.720 request: 00:14:51.720 { 00:14:51.720 "base_bdev": "BaseBdev1", 00:14:51.720 "raid_bdev": "raid_bdev1", 00:14:51.720 "method": "bdev_raid_add_base_bdev", 00:14:51.720 
"req_id": 1 00:14:51.720 } 00:14:51.720 Got JSON-RPC error response 00:14:51.720 response: 00:14:51.720 { 00:14:51.720 "code": -22, 00:14:51.720 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:51.720 } 00:14:51.720 05:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:51.720 05:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:14:51.720 05:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:51.720 05:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:51.720 05:52:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:51.720 05:52:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:52.659 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:52.659 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.659 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.659 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.659 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.659 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:52.659 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.659 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.659 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.659 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.659 
05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.659 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.659 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.659 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.659 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.659 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.659 "name": "raid_bdev1", 00:14:52.659 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:52.659 "strip_size_kb": 0, 00:14:52.659 "state": "online", 00:14:52.659 "raid_level": "raid1", 00:14:52.659 "superblock": true, 00:14:52.659 "num_base_bdevs": 4, 00:14:52.659 "num_base_bdevs_discovered": 2, 00:14:52.659 "num_base_bdevs_operational": 2, 00:14:52.659 "base_bdevs_list": [ 00:14:52.660 { 00:14:52.660 "name": null, 00:14:52.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.660 "is_configured": false, 00:14:52.660 "data_offset": 0, 00:14:52.660 "data_size": 63488 00:14:52.660 }, 00:14:52.660 { 00:14:52.660 "name": null, 00:14:52.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.660 "is_configured": false, 00:14:52.660 "data_offset": 2048, 00:14:52.660 "data_size": 63488 00:14:52.660 }, 00:14:52.660 { 00:14:52.660 "name": "BaseBdev3", 00:14:52.660 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:52.660 "is_configured": true, 00:14:52.660 "data_offset": 2048, 00:14:52.660 "data_size": 63488 00:14:52.660 }, 00:14:52.660 { 00:14:52.660 "name": "BaseBdev4", 00:14:52.660 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:52.660 "is_configured": true, 00:14:52.660 "data_offset": 2048, 00:14:52.660 "data_size": 63488 00:14:52.660 } 00:14:52.660 ] 00:14:52.660 }' 00:14:52.660 05:53:00 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.660 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.230 "name": "raid_bdev1", 00:14:53.230 "uuid": "95e7f209-4218-4b3f-bbd4-a473956e86c8", 00:14:53.230 "strip_size_kb": 0, 00:14:53.230 "state": "online", 00:14:53.230 "raid_level": "raid1", 00:14:53.230 "superblock": true, 00:14:53.230 "num_base_bdevs": 4, 00:14:53.230 "num_base_bdevs_discovered": 2, 00:14:53.230 "num_base_bdevs_operational": 2, 00:14:53.230 "base_bdevs_list": [ 00:14:53.230 { 00:14:53.230 "name": null, 00:14:53.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.230 "is_configured": false, 00:14:53.230 "data_offset": 0, 00:14:53.230 
"data_size": 63488 00:14:53.230 }, 00:14:53.230 { 00:14:53.230 "name": null, 00:14:53.230 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.230 "is_configured": false, 00:14:53.230 "data_offset": 2048, 00:14:53.230 "data_size": 63488 00:14:53.230 }, 00:14:53.230 { 00:14:53.230 "name": "BaseBdev3", 00:14:53.230 "uuid": "ea160045-ddbd-5ed0-a761-b8a3f01b1cce", 00:14:53.230 "is_configured": true, 00:14:53.230 "data_offset": 2048, 00:14:53.230 "data_size": 63488 00:14:53.230 }, 00:14:53.230 { 00:14:53.230 "name": "BaseBdev4", 00:14:53.230 "uuid": "8eff1df3-2cbb-5859-8ba8-8a293942bd6e", 00:14:53.230 "is_configured": true, 00:14:53.230 "data_offset": 2048, 00:14:53.230 "data_size": 63488 00:14:53.230 } 00:14:53.230 ] 00:14:53.230 }' 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79690 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79690 ']' 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79690 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79690 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:53.230 
05:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:53.230 killing process with pid 79690 00:14:53.230 Received shutdown signal, test time was about 17.619308 seconds 00:14:53.230 00:14:53.230 Latency(us) 00:14:53.230 [2024-12-12T05:53:00.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.230 [2024-12-12T05:53:00.752Z] =================================================================================================================== 00:14:53.230 [2024-12-12T05:53:00.752Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79690' 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79690 00:14:53.230 [2024-12-12 05:53:00.705232] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:53.230 [2024-12-12 05:53:00.705347] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:53.230 05:53:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79690 00:14:53.230 [2024-12-12 05:53:00.705418] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:53.230 [2024-12-12 05:53:00.705427] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:53.800 [2024-12-12 05:53:01.093075] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:54.739 05:53:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:54.739 00:14:54.739 real 0m20.925s 00:14:54.739 user 0m27.277s 00:14:54.739 sys 0m2.495s 00:14:54.739 05:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:54.739 ************************************ 00:14:54.739 END TEST raid_rebuild_test_sb_io 00:14:54.739 
************************************ 00:14:54.739 05:53:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.739 05:53:02 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:14:54.739 05:53:02 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:14:54.739 05:53:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:14:54.739 05:53:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:54.739 05:53:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:54.999 ************************************ 00:14:54.999 START TEST raid5f_state_function_test 00:14:54.999 ************************************ 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:14:54.999 05:53:02 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80286 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80286' 00:14:54.999 Process raid pid: 80286 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80286 00:14:54.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80286 ']' 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:54.999 05:53:02 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:54.999 [2024-12-12 05:53:02.356363] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:14:54.999 [2024-12-12 05:53:02.356492] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.259 [2024-12-12 05:53:02.530585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.259 [2024-12-12 05:53:02.641245] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.519 [2024-12-12 05:53:02.829096] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.519 [2024-12-12 05:53:02.829131] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.779 [2024-12-12 05:53:03.168882] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:55.779 [2024-12-12 05:53:03.168935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:55.779 [2024-12-12 05:53:03.168946] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:55.779 [2024-12-12 05:53:03.168971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:55.779 [2024-12-12 05:53:03.168977] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:14:55.779 [2024-12-12 05:53:03.168986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:55.779 "name": "Existed_Raid", 00:14:55.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.779 "strip_size_kb": 64, 00:14:55.779 "state": "configuring", 00:14:55.779 "raid_level": "raid5f", 00:14:55.779 "superblock": false, 00:14:55.779 "num_base_bdevs": 3, 00:14:55.779 "num_base_bdevs_discovered": 0, 00:14:55.779 "num_base_bdevs_operational": 3, 00:14:55.779 "base_bdevs_list": [ 00:14:55.779 { 00:14:55.779 "name": "BaseBdev1", 00:14:55.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.779 "is_configured": false, 00:14:55.779 "data_offset": 0, 00:14:55.779 "data_size": 0 00:14:55.779 }, 00:14:55.779 { 00:14:55.779 "name": "BaseBdev2", 00:14:55.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.779 "is_configured": false, 00:14:55.779 "data_offset": 0, 00:14:55.779 "data_size": 0 00:14:55.779 }, 00:14:55.779 { 00:14:55.779 "name": "BaseBdev3", 00:14:55.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.779 "is_configured": false, 00:14:55.779 "data_offset": 0, 00:14:55.779 "data_size": 0 00:14:55.779 } 00:14:55.779 ] 00:14:55.779 }' 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:55.779 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.349 [2024-12-12 05:53:03.596069] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:56.349 [2024-12-12 05:53:03.596148] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.349 [2024-12-12 05:53:03.608062] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:56.349 [2024-12-12 05:53:03.608142] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:56.349 [2024-12-12 05:53:03.608169] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:56.349 [2024-12-12 05:53:03.608190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:56.349 [2024-12-12 05:53:03.608208] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:56.349 [2024-12-12 05:53:03.608227] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.349 [2024-12-12 05:53:03.654752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.349 BaseBdev1 00:14:56.349 05:53:03 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.349 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.349 [ 00:14:56.349 { 00:14:56.349 "name": "BaseBdev1", 00:14:56.349 "aliases": [ 00:14:56.349 "47707539-4926-4db7-9737-7b9e39987e4f" 00:14:56.349 ], 00:14:56.349 "product_name": "Malloc disk", 00:14:56.349 "block_size": 512, 00:14:56.349 "num_blocks": 65536, 00:14:56.349 "uuid": "47707539-4926-4db7-9737-7b9e39987e4f", 00:14:56.349 "assigned_rate_limits": { 00:14:56.349 "rw_ios_per_sec": 0, 00:14:56.349 
"rw_mbytes_per_sec": 0, 00:14:56.349 "r_mbytes_per_sec": 0, 00:14:56.349 "w_mbytes_per_sec": 0 00:14:56.349 }, 00:14:56.349 "claimed": true, 00:14:56.350 "claim_type": "exclusive_write", 00:14:56.350 "zoned": false, 00:14:56.350 "supported_io_types": { 00:14:56.350 "read": true, 00:14:56.350 "write": true, 00:14:56.350 "unmap": true, 00:14:56.350 "flush": true, 00:14:56.350 "reset": true, 00:14:56.350 "nvme_admin": false, 00:14:56.350 "nvme_io": false, 00:14:56.350 "nvme_io_md": false, 00:14:56.350 "write_zeroes": true, 00:14:56.350 "zcopy": true, 00:14:56.350 "get_zone_info": false, 00:14:56.350 "zone_management": false, 00:14:56.350 "zone_append": false, 00:14:56.350 "compare": false, 00:14:56.350 "compare_and_write": false, 00:14:56.350 "abort": true, 00:14:56.350 "seek_hole": false, 00:14:56.350 "seek_data": false, 00:14:56.350 "copy": true, 00:14:56.350 "nvme_iov_md": false 00:14:56.350 }, 00:14:56.350 "memory_domains": [ 00:14:56.350 { 00:14:56.350 "dma_device_id": "system", 00:14:56.350 "dma_device_type": 1 00:14:56.350 }, 00:14:56.350 { 00:14:56.350 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.350 "dma_device_type": 2 00:14:56.350 } 00:14:56.350 ], 00:14:56.350 "driver_specific": {} 00:14:56.350 } 00:14:56.350 ] 00:14:56.350 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.350 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:56.350 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:56.350 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.350 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.350 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.350 05:53:03 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.350 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.350 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.350 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.350 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.350 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.350 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.350 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.350 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.350 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.350 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.350 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.350 "name": "Existed_Raid", 00:14:56.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.350 "strip_size_kb": 64, 00:14:56.350 "state": "configuring", 00:14:56.350 "raid_level": "raid5f", 00:14:56.350 "superblock": false, 00:14:56.350 "num_base_bdevs": 3, 00:14:56.350 "num_base_bdevs_discovered": 1, 00:14:56.350 "num_base_bdevs_operational": 3, 00:14:56.350 "base_bdevs_list": [ 00:14:56.350 { 00:14:56.350 "name": "BaseBdev1", 00:14:56.350 "uuid": "47707539-4926-4db7-9737-7b9e39987e4f", 00:14:56.350 "is_configured": true, 00:14:56.350 "data_offset": 0, 00:14:56.350 "data_size": 65536 00:14:56.350 }, 00:14:56.350 { 00:14:56.350 "name": 
"BaseBdev2", 00:14:56.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.350 "is_configured": false, 00:14:56.350 "data_offset": 0, 00:14:56.350 "data_size": 0 00:14:56.350 }, 00:14:56.350 { 00:14:56.350 "name": "BaseBdev3", 00:14:56.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.350 "is_configured": false, 00:14:56.350 "data_offset": 0, 00:14:56.350 "data_size": 0 00:14:56.350 } 00:14:56.350 ] 00:14:56.350 }' 00:14:56.350 05:53:03 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.350 05:53:03 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.610 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:14:56.610 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.610 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.610 [2024-12-12 05:53:04.110177] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:56.610 [2024-12-12 05:53:04.110274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:14:56.610 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.610 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:56.610 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.610 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.610 [2024-12-12 05:53:04.122213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:56.610 [2024-12-12 05:53:04.124014] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:14:56.610 [2024-12-12 05:53:04.124092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:56.610 [2024-12-12 05:53:04.124122] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:56.610 [2024-12-12 05:53:04.124144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:56.610 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.610 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:14:56.610 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:56.610 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:56.610 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:56.610 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:56.610 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:56.610 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:56.610 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:56.610 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.870 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.870 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.870 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.870 05:53:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.870 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.870 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.870 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:56.870 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.870 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.870 "name": "Existed_Raid", 00:14:56.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.870 "strip_size_kb": 64, 00:14:56.870 "state": "configuring", 00:14:56.870 "raid_level": "raid5f", 00:14:56.870 "superblock": false, 00:14:56.870 "num_base_bdevs": 3, 00:14:56.870 "num_base_bdevs_discovered": 1, 00:14:56.870 "num_base_bdevs_operational": 3, 00:14:56.870 "base_bdevs_list": [ 00:14:56.870 { 00:14:56.870 "name": "BaseBdev1", 00:14:56.870 "uuid": "47707539-4926-4db7-9737-7b9e39987e4f", 00:14:56.870 "is_configured": true, 00:14:56.870 "data_offset": 0, 00:14:56.871 "data_size": 65536 00:14:56.871 }, 00:14:56.871 { 00:14:56.871 "name": "BaseBdev2", 00:14:56.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.871 "is_configured": false, 00:14:56.871 "data_offset": 0, 00:14:56.871 "data_size": 0 00:14:56.871 }, 00:14:56.871 { 00:14:56.871 "name": "BaseBdev3", 00:14:56.871 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.871 "is_configured": false, 00:14:56.871 "data_offset": 0, 00:14:56.871 "data_size": 0 00:14:56.871 } 00:14:56.871 ] 00:14:56.871 }' 00:14:56.871 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.871 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.133 [2024-12-12 05:53:04.571538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.133 BaseBdev2 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:57.133 [ 00:14:57.133 { 00:14:57.133 "name": "BaseBdev2", 00:14:57.133 "aliases": [ 00:14:57.133 "0fa511f4-81b1-41bc-be0e-0aada622eae9" 00:14:57.133 ], 00:14:57.133 "product_name": "Malloc disk", 00:14:57.133 "block_size": 512, 00:14:57.133 "num_blocks": 65536, 00:14:57.133 "uuid": "0fa511f4-81b1-41bc-be0e-0aada622eae9", 00:14:57.133 "assigned_rate_limits": { 00:14:57.133 "rw_ios_per_sec": 0, 00:14:57.133 "rw_mbytes_per_sec": 0, 00:14:57.133 "r_mbytes_per_sec": 0, 00:14:57.133 "w_mbytes_per_sec": 0 00:14:57.133 }, 00:14:57.133 "claimed": true, 00:14:57.133 "claim_type": "exclusive_write", 00:14:57.133 "zoned": false, 00:14:57.133 "supported_io_types": { 00:14:57.133 "read": true, 00:14:57.133 "write": true, 00:14:57.133 "unmap": true, 00:14:57.133 "flush": true, 00:14:57.133 "reset": true, 00:14:57.133 "nvme_admin": false, 00:14:57.133 "nvme_io": false, 00:14:57.133 "nvme_io_md": false, 00:14:57.133 "write_zeroes": true, 00:14:57.133 "zcopy": true, 00:14:57.133 "get_zone_info": false, 00:14:57.133 "zone_management": false, 00:14:57.133 "zone_append": false, 00:14:57.133 "compare": false, 00:14:57.133 "compare_and_write": false, 00:14:57.133 "abort": true, 00:14:57.133 "seek_hole": false, 00:14:57.133 "seek_data": false, 00:14:57.133 "copy": true, 00:14:57.133 "nvme_iov_md": false 00:14:57.133 }, 00:14:57.133 "memory_domains": [ 00:14:57.133 { 00:14:57.133 "dma_device_id": "system", 00:14:57.133 "dma_device_type": 1 00:14:57.133 }, 00:14:57.133 { 00:14:57.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.133 "dma_device_type": 2 00:14:57.133 } 00:14:57.133 ], 00:14:57.133 "driver_specific": {} 00:14:57.133 } 00:14:57.133 ] 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.133 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.401 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:14:57.401 "name": "Existed_Raid", 00:14:57.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.401 "strip_size_kb": 64, 00:14:57.401 "state": "configuring", 00:14:57.401 "raid_level": "raid5f", 00:14:57.401 "superblock": false, 00:14:57.401 "num_base_bdevs": 3, 00:14:57.401 "num_base_bdevs_discovered": 2, 00:14:57.401 "num_base_bdevs_operational": 3, 00:14:57.401 "base_bdevs_list": [ 00:14:57.401 { 00:14:57.401 "name": "BaseBdev1", 00:14:57.401 "uuid": "47707539-4926-4db7-9737-7b9e39987e4f", 00:14:57.401 "is_configured": true, 00:14:57.401 "data_offset": 0, 00:14:57.401 "data_size": 65536 00:14:57.401 }, 00:14:57.401 { 00:14:57.401 "name": "BaseBdev2", 00:14:57.401 "uuid": "0fa511f4-81b1-41bc-be0e-0aada622eae9", 00:14:57.401 "is_configured": true, 00:14:57.401 "data_offset": 0, 00:14:57.401 "data_size": 65536 00:14:57.401 }, 00:14:57.401 { 00:14:57.401 "name": "BaseBdev3", 00:14:57.401 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.401 "is_configured": false, 00:14:57.401 "data_offset": 0, 00:14:57.401 "data_size": 0 00:14:57.401 } 00:14:57.401 ] 00:14:57.401 }' 00:14:57.401 05:53:04 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.401 05:53:04 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.708 [2024-12-12 05:53:05.155222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:57.708 [2024-12-12 05:53:05.155279] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:14:57.708 [2024-12-12 05:53:05.155294] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:14:57.708 [2024-12-12 05:53:05.155563] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:57.708 [2024-12-12 05:53:05.160644] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:14:57.708 [2024-12-12 05:53:05.160706] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:14:57.708 [2024-12-12 05:53:05.161036] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:57.708 BaseBdev3 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.708 [ 00:14:57.708 { 00:14:57.708 "name": "BaseBdev3", 00:14:57.708 "aliases": [ 00:14:57.708 "0713eb84-c85c-421e-8261-61bf7cef5746" 00:14:57.708 ], 00:14:57.708 "product_name": "Malloc disk", 00:14:57.708 "block_size": 512, 00:14:57.708 "num_blocks": 65536, 00:14:57.708 "uuid": "0713eb84-c85c-421e-8261-61bf7cef5746", 00:14:57.708 "assigned_rate_limits": { 00:14:57.708 "rw_ios_per_sec": 0, 00:14:57.708 "rw_mbytes_per_sec": 0, 00:14:57.708 "r_mbytes_per_sec": 0, 00:14:57.708 "w_mbytes_per_sec": 0 00:14:57.708 }, 00:14:57.708 "claimed": true, 00:14:57.708 "claim_type": "exclusive_write", 00:14:57.708 "zoned": false, 00:14:57.708 "supported_io_types": { 00:14:57.708 "read": true, 00:14:57.708 "write": true, 00:14:57.708 "unmap": true, 00:14:57.708 "flush": true, 00:14:57.708 "reset": true, 00:14:57.708 "nvme_admin": false, 00:14:57.708 "nvme_io": false, 00:14:57.708 "nvme_io_md": false, 00:14:57.708 "write_zeroes": true, 00:14:57.708 "zcopy": true, 00:14:57.708 "get_zone_info": false, 00:14:57.708 "zone_management": false, 00:14:57.708 "zone_append": false, 00:14:57.708 "compare": false, 00:14:57.708 "compare_and_write": false, 00:14:57.708 "abort": true, 00:14:57.708 "seek_hole": false, 00:14:57.708 "seek_data": false, 00:14:57.708 "copy": true, 00:14:57.708 "nvme_iov_md": false 00:14:57.708 }, 00:14:57.708 "memory_domains": [ 00:14:57.708 { 00:14:57.708 "dma_device_id": "system", 00:14:57.708 "dma_device_type": 1 00:14:57.708 }, 00:14:57.708 { 00:14:57.708 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:57.708 "dma_device_type": 2 00:14:57.708 } 00:14:57.708 ], 00:14:57.708 "driver_specific": {} 00:14:57.708 } 00:14:57.708 ] 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.708 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.708 05:53:05 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.967 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.967 "name": "Existed_Raid", 00:14:57.967 "uuid": "b799e21a-831d-43a1-bb03-ffefe3da6fe2", 00:14:57.967 "strip_size_kb": 64, 00:14:57.967 "state": "online", 00:14:57.967 "raid_level": "raid5f", 00:14:57.967 "superblock": false, 00:14:57.967 "num_base_bdevs": 3, 00:14:57.967 "num_base_bdevs_discovered": 3, 00:14:57.967 "num_base_bdevs_operational": 3, 00:14:57.967 "base_bdevs_list": [ 00:14:57.967 { 00:14:57.967 "name": "BaseBdev1", 00:14:57.967 "uuid": "47707539-4926-4db7-9737-7b9e39987e4f", 00:14:57.967 "is_configured": true, 00:14:57.967 "data_offset": 0, 00:14:57.967 "data_size": 65536 00:14:57.967 }, 00:14:57.967 { 00:14:57.967 "name": "BaseBdev2", 00:14:57.967 "uuid": "0fa511f4-81b1-41bc-be0e-0aada622eae9", 00:14:57.967 "is_configured": true, 00:14:57.967 "data_offset": 0, 00:14:57.967 "data_size": 65536 00:14:57.967 }, 00:14:57.967 { 00:14:57.967 "name": "BaseBdev3", 00:14:57.967 "uuid": "0713eb84-c85c-421e-8261-61bf7cef5746", 00:14:57.967 "is_configured": true, 00:14:57.967 "data_offset": 0, 00:14:57.967 "data_size": 65536 00:14:57.967 } 00:14:57.967 ] 00:14:57.967 }' 00:14:57.967 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.967 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.227 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:14:58.227 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:14:58.227 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:14:58.227 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:14:58.227 05:53:05 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:14:58.227 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:14:58.227 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:14:58.227 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.227 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.227 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:14:58.227 [2024-12-12 05:53:05.650308] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:58.227 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.227 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:14:58.227 "name": "Existed_Raid", 00:14:58.227 "aliases": [ 00:14:58.227 "b799e21a-831d-43a1-bb03-ffefe3da6fe2" 00:14:58.227 ], 00:14:58.227 "product_name": "Raid Volume", 00:14:58.227 "block_size": 512, 00:14:58.227 "num_blocks": 131072, 00:14:58.227 "uuid": "b799e21a-831d-43a1-bb03-ffefe3da6fe2", 00:14:58.227 "assigned_rate_limits": { 00:14:58.227 "rw_ios_per_sec": 0, 00:14:58.227 "rw_mbytes_per_sec": 0, 00:14:58.227 "r_mbytes_per_sec": 0, 00:14:58.227 "w_mbytes_per_sec": 0 00:14:58.227 }, 00:14:58.227 "claimed": false, 00:14:58.227 "zoned": false, 00:14:58.227 "supported_io_types": { 00:14:58.227 "read": true, 00:14:58.227 "write": true, 00:14:58.227 "unmap": false, 00:14:58.227 "flush": false, 00:14:58.227 "reset": true, 00:14:58.227 "nvme_admin": false, 00:14:58.227 "nvme_io": false, 00:14:58.227 "nvme_io_md": false, 00:14:58.227 "write_zeroes": true, 00:14:58.227 "zcopy": false, 00:14:58.227 "get_zone_info": false, 00:14:58.227 "zone_management": false, 00:14:58.227 "zone_append": false, 
00:14:58.227 "compare": false, 00:14:58.227 "compare_and_write": false, 00:14:58.227 "abort": false, 00:14:58.227 "seek_hole": false, 00:14:58.227 "seek_data": false, 00:14:58.227 "copy": false, 00:14:58.227 "nvme_iov_md": false 00:14:58.227 }, 00:14:58.227 "driver_specific": { 00:14:58.227 "raid": { 00:14:58.227 "uuid": "b799e21a-831d-43a1-bb03-ffefe3da6fe2", 00:14:58.227 "strip_size_kb": 64, 00:14:58.227 "state": "online", 00:14:58.227 "raid_level": "raid5f", 00:14:58.227 "superblock": false, 00:14:58.227 "num_base_bdevs": 3, 00:14:58.227 "num_base_bdevs_discovered": 3, 00:14:58.227 "num_base_bdevs_operational": 3, 00:14:58.227 "base_bdevs_list": [ 00:14:58.227 { 00:14:58.227 "name": "BaseBdev1", 00:14:58.227 "uuid": "47707539-4926-4db7-9737-7b9e39987e4f", 00:14:58.227 "is_configured": true, 00:14:58.227 "data_offset": 0, 00:14:58.227 "data_size": 65536 00:14:58.227 }, 00:14:58.227 { 00:14:58.227 "name": "BaseBdev2", 00:14:58.227 "uuid": "0fa511f4-81b1-41bc-be0e-0aada622eae9", 00:14:58.227 "is_configured": true, 00:14:58.227 "data_offset": 0, 00:14:58.227 "data_size": 65536 00:14:58.227 }, 00:14:58.227 { 00:14:58.227 "name": "BaseBdev3", 00:14:58.227 "uuid": "0713eb84-c85c-421e-8261-61bf7cef5746", 00:14:58.227 "is_configured": true, 00:14:58.227 "data_offset": 0, 00:14:58.227 "data_size": 65536 00:14:58.227 } 00:14:58.227 ] 00:14:58.227 } 00:14:58.227 } 00:14:58.227 }' 00:14:58.227 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:58.227 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:14:58.227 BaseBdev2 00:14:58.227 BaseBdev3' 00:14:58.227 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.487 05:53:05 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.487 [2024-12-12 05:53:05.933708] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:14:58.747 
05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:58.747 "name": "Existed_Raid", 00:14:58.747 "uuid": "b799e21a-831d-43a1-bb03-ffefe3da6fe2", 00:14:58.747 "strip_size_kb": 64, 00:14:58.747 "state": 
"online", 00:14:58.747 "raid_level": "raid5f", 00:14:58.747 "superblock": false, 00:14:58.747 "num_base_bdevs": 3, 00:14:58.747 "num_base_bdevs_discovered": 2, 00:14:58.747 "num_base_bdevs_operational": 2, 00:14:58.747 "base_bdevs_list": [ 00:14:58.747 { 00:14:58.747 "name": null, 00:14:58.747 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.747 "is_configured": false, 00:14:58.747 "data_offset": 0, 00:14:58.747 "data_size": 65536 00:14:58.747 }, 00:14:58.747 { 00:14:58.747 "name": "BaseBdev2", 00:14:58.747 "uuid": "0fa511f4-81b1-41bc-be0e-0aada622eae9", 00:14:58.747 "is_configured": true, 00:14:58.747 "data_offset": 0, 00:14:58.747 "data_size": 65536 00:14:58.747 }, 00:14:58.747 { 00:14:58.747 "name": "BaseBdev3", 00:14:58.747 "uuid": "0713eb84-c85c-421e-8261-61bf7cef5746", 00:14:58.747 "is_configured": true, 00:14:58.747 "data_offset": 0, 00:14:58.747 "data_size": 65536 00:14:58.747 } 00:14:58.747 ] 00:14:58.747 }' 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:58.747 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.007 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:14:59.007 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:59.007 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.007 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:59.007 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.007 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.007 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.007 05:53:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:59.007 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:59.007 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:14:59.007 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.007 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.007 [2024-12-12 05:53:06.466779] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:59.007 [2024-12-12 05:53:06.466875] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:59.267 [2024-12-12 05:53:06.556835] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.267 [2024-12-12 05:53:06.616749] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:59.267 [2024-12-12 05:53:06.616795] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.267 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.527 BaseBdev2 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:14:59.527 [ 00:14:59.527 { 00:14:59.527 "name": "BaseBdev2", 00:14:59.527 "aliases": [ 00:14:59.527 "75f6393e-1b0d-4b58-bea8-9cda4a9dd730" 00:14:59.527 ], 00:14:59.527 "product_name": "Malloc disk", 00:14:59.527 "block_size": 512, 00:14:59.527 "num_blocks": 65536, 00:14:59.527 "uuid": "75f6393e-1b0d-4b58-bea8-9cda4a9dd730", 00:14:59.527 "assigned_rate_limits": { 00:14:59.527 "rw_ios_per_sec": 0, 00:14:59.527 "rw_mbytes_per_sec": 0, 00:14:59.527 "r_mbytes_per_sec": 0, 00:14:59.527 "w_mbytes_per_sec": 0 00:14:59.527 }, 00:14:59.527 "claimed": false, 00:14:59.527 "zoned": false, 00:14:59.527 "supported_io_types": { 00:14:59.527 "read": true, 00:14:59.527 "write": true, 00:14:59.527 "unmap": true, 00:14:59.527 "flush": true, 00:14:59.527 "reset": true, 00:14:59.527 "nvme_admin": false, 00:14:59.527 "nvme_io": false, 00:14:59.527 "nvme_io_md": false, 00:14:59.527 "write_zeroes": true, 00:14:59.527 "zcopy": true, 00:14:59.527 "get_zone_info": false, 00:14:59.527 "zone_management": false, 00:14:59.527 "zone_append": false, 00:14:59.527 "compare": false, 00:14:59.527 "compare_and_write": false, 00:14:59.527 "abort": true, 00:14:59.527 "seek_hole": false, 00:14:59.527 "seek_data": false, 00:14:59.527 "copy": true, 00:14:59.527 "nvme_iov_md": false 00:14:59.527 }, 00:14:59.527 "memory_domains": [ 00:14:59.527 { 00:14:59.527 "dma_device_id": "system", 00:14:59.527 "dma_device_type": 1 00:14:59.527 }, 00:14:59.527 { 00:14:59.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.527 "dma_device_type": 2 00:14:59.527 } 00:14:59.527 ], 00:14:59.527 "driver_specific": {} 00:14:59.527 } 00:14:59.527 ] 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.527 BaseBdev3 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.527 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:14:59.528 [ 00:14:59.528 { 00:14:59.528 "name": "BaseBdev3", 00:14:59.528 "aliases": [ 00:14:59.528 "10850e1d-d3f5-4da5-8b65-a940fd221a0a" 00:14:59.528 ], 00:14:59.528 "product_name": "Malloc disk", 00:14:59.528 "block_size": 512, 00:14:59.528 "num_blocks": 65536, 00:14:59.528 "uuid": "10850e1d-d3f5-4da5-8b65-a940fd221a0a", 00:14:59.528 "assigned_rate_limits": { 00:14:59.528 "rw_ios_per_sec": 0, 00:14:59.528 "rw_mbytes_per_sec": 0, 00:14:59.528 "r_mbytes_per_sec": 0, 00:14:59.528 "w_mbytes_per_sec": 0 00:14:59.528 }, 00:14:59.528 "claimed": false, 00:14:59.528 "zoned": false, 00:14:59.528 "supported_io_types": { 00:14:59.528 "read": true, 00:14:59.528 "write": true, 00:14:59.528 "unmap": true, 00:14:59.528 "flush": true, 00:14:59.528 "reset": true, 00:14:59.528 "nvme_admin": false, 00:14:59.528 "nvme_io": false, 00:14:59.528 "nvme_io_md": false, 00:14:59.528 "write_zeroes": true, 00:14:59.528 "zcopy": true, 00:14:59.528 "get_zone_info": false, 00:14:59.528 "zone_management": false, 00:14:59.528 "zone_append": false, 00:14:59.528 "compare": false, 00:14:59.528 "compare_and_write": false, 00:14:59.528 "abort": true, 00:14:59.528 "seek_hole": false, 00:14:59.528 "seek_data": false, 00:14:59.528 "copy": true, 00:14:59.528 "nvme_iov_md": false 00:14:59.528 }, 00:14:59.528 "memory_domains": [ 00:14:59.528 { 00:14:59.528 "dma_device_id": "system", 00:14:59.528 "dma_device_type": 1 00:14:59.528 }, 00:14:59.528 { 00:14:59.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.528 "dma_device_type": 2 00:14:59.528 } 00:14:59.528 ], 00:14:59.528 "driver_specific": {} 00:14:59.528 } 00:14:59.528 ] 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:14:59.528 05:53:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.528 [2024-12-12 05:53:06.920716] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:59.528 [2024-12-12 05:53:06.920824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:59.528 [2024-12-12 05:53:06.920865] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:59.528 [2024-12-12 05:53:06.922592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.528 05:53:06 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.528 "name": "Existed_Raid", 00:14:59.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.528 "strip_size_kb": 64, 00:14:59.528 "state": "configuring", 00:14:59.528 "raid_level": "raid5f", 00:14:59.528 "superblock": false, 00:14:59.528 "num_base_bdevs": 3, 00:14:59.528 "num_base_bdevs_discovered": 2, 00:14:59.528 "num_base_bdevs_operational": 3, 00:14:59.528 "base_bdevs_list": [ 00:14:59.528 { 00:14:59.528 "name": "BaseBdev1", 00:14:59.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.528 "is_configured": false, 00:14:59.528 "data_offset": 0, 00:14:59.528 "data_size": 0 00:14:59.528 }, 00:14:59.528 { 00:14:59.528 "name": "BaseBdev2", 00:14:59.528 "uuid": "75f6393e-1b0d-4b58-bea8-9cda4a9dd730", 00:14:59.528 "is_configured": true, 00:14:59.528 "data_offset": 0, 00:14:59.528 "data_size": 65536 00:14:59.528 }, 00:14:59.528 { 00:14:59.528 "name": "BaseBdev3", 00:14:59.528 "uuid": "10850e1d-d3f5-4da5-8b65-a940fd221a0a", 00:14:59.528 "is_configured": true, 
00:14:59.528 "data_offset": 0, 00:14:59.528 "data_size": 65536 00:14:59.528 } 00:14:59.528 ] 00:14:59.528 }' 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.528 05:53:06 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.097 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:00.097 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.097 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.098 [2024-12-12 05:53:07.371954] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:00.098 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.098 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:00.098 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.098 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.098 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.098 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.098 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.098 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.098 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.098 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.098 05:53:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.098 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.098 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.098 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.098 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.098 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.098 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.098 "name": "Existed_Raid", 00:15:00.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.098 "strip_size_kb": 64, 00:15:00.098 "state": "configuring", 00:15:00.098 "raid_level": "raid5f", 00:15:00.098 "superblock": false, 00:15:00.098 "num_base_bdevs": 3, 00:15:00.098 "num_base_bdevs_discovered": 1, 00:15:00.098 "num_base_bdevs_operational": 3, 00:15:00.098 "base_bdevs_list": [ 00:15:00.098 { 00:15:00.098 "name": "BaseBdev1", 00:15:00.098 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.098 "is_configured": false, 00:15:00.098 "data_offset": 0, 00:15:00.098 "data_size": 0 00:15:00.098 }, 00:15:00.098 { 00:15:00.098 "name": null, 00:15:00.098 "uuid": "75f6393e-1b0d-4b58-bea8-9cda4a9dd730", 00:15:00.098 "is_configured": false, 00:15:00.098 "data_offset": 0, 00:15:00.098 "data_size": 65536 00:15:00.098 }, 00:15:00.098 { 00:15:00.098 "name": "BaseBdev3", 00:15:00.098 "uuid": "10850e1d-d3f5-4da5-8b65-a940fd221a0a", 00:15:00.098 "is_configured": true, 00:15:00.098 "data_offset": 0, 00:15:00.098 "data_size": 65536 00:15:00.098 } 00:15:00.098 ] 00:15:00.098 }' 00:15:00.098 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.098 05:53:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.358 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:00.358 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.358 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.358 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.358 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.358 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:00.358 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:00.358 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.358 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.358 [2024-12-12 05:53:07.879014] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:00.618 BaseBdev1 00:15:00.618 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.618 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:00.618 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:00.618 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:00.618 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:00.618 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:00.618 05:53:07 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:00.618 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.619 [ 00:15:00.619 { 00:15:00.619 "name": "BaseBdev1", 00:15:00.619 "aliases": [ 00:15:00.619 "4762e510-c51e-4e5b-836b-e1b8c721f5a3" 00:15:00.619 ], 00:15:00.619 "product_name": "Malloc disk", 00:15:00.619 "block_size": 512, 00:15:00.619 "num_blocks": 65536, 00:15:00.619 "uuid": "4762e510-c51e-4e5b-836b-e1b8c721f5a3", 00:15:00.619 "assigned_rate_limits": { 00:15:00.619 "rw_ios_per_sec": 0, 00:15:00.619 "rw_mbytes_per_sec": 0, 00:15:00.619 "r_mbytes_per_sec": 0, 00:15:00.619 "w_mbytes_per_sec": 0 00:15:00.619 }, 00:15:00.619 "claimed": true, 00:15:00.619 "claim_type": "exclusive_write", 00:15:00.619 "zoned": false, 00:15:00.619 "supported_io_types": { 00:15:00.619 "read": true, 00:15:00.619 "write": true, 00:15:00.619 "unmap": true, 00:15:00.619 "flush": true, 00:15:00.619 "reset": true, 00:15:00.619 "nvme_admin": false, 00:15:00.619 "nvme_io": false, 00:15:00.619 "nvme_io_md": false, 00:15:00.619 "write_zeroes": true, 00:15:00.619 "zcopy": true, 00:15:00.619 "get_zone_info": false, 00:15:00.619 "zone_management": false, 00:15:00.619 "zone_append": false, 00:15:00.619 
"compare": false, 00:15:00.619 "compare_and_write": false, 00:15:00.619 "abort": true, 00:15:00.619 "seek_hole": false, 00:15:00.619 "seek_data": false, 00:15:00.619 "copy": true, 00:15:00.619 "nvme_iov_md": false 00:15:00.619 }, 00:15:00.619 "memory_domains": [ 00:15:00.619 { 00:15:00.619 "dma_device_id": "system", 00:15:00.619 "dma_device_type": 1 00:15:00.619 }, 00:15:00.619 { 00:15:00.619 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.619 "dma_device_type": 2 00:15:00.619 } 00:15:00.619 ], 00:15:00.619 "driver_specific": {} 00:15:00.619 } 00:15:00.619 ] 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:00.619 05:53:07 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:00.619 "name": "Existed_Raid", 00:15:00.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:00.619 "strip_size_kb": 64, 00:15:00.619 "state": "configuring", 00:15:00.619 "raid_level": "raid5f", 00:15:00.619 "superblock": false, 00:15:00.619 "num_base_bdevs": 3, 00:15:00.619 "num_base_bdevs_discovered": 2, 00:15:00.619 "num_base_bdevs_operational": 3, 00:15:00.619 "base_bdevs_list": [ 00:15:00.619 { 00:15:00.619 "name": "BaseBdev1", 00:15:00.619 "uuid": "4762e510-c51e-4e5b-836b-e1b8c721f5a3", 00:15:00.619 "is_configured": true, 00:15:00.619 "data_offset": 0, 00:15:00.619 "data_size": 65536 00:15:00.619 }, 00:15:00.619 { 00:15:00.619 "name": null, 00:15:00.619 "uuid": "75f6393e-1b0d-4b58-bea8-9cda4a9dd730", 00:15:00.619 "is_configured": false, 00:15:00.619 "data_offset": 0, 00:15:00.619 "data_size": 65536 00:15:00.619 }, 00:15:00.619 { 00:15:00.619 "name": "BaseBdev3", 00:15:00.619 "uuid": "10850e1d-d3f5-4da5-8b65-a940fd221a0a", 00:15:00.619 "is_configured": true, 00:15:00.619 "data_offset": 0, 00:15:00.619 "data_size": 65536 00:15:00.619 } 00:15:00.619 ] 00:15:00.619 }' 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:00.619 05:53:07 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.879 05:53:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.879 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:00.879 05:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.879 05:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.879 05:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.879 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:00.879 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:00.879 05:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.879 05:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.879 [2024-12-12 05:53:08.390225] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:00.879 05:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.879 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:00.879 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:00.879 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:00.879 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:00.879 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:00.879 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:00.879 05:53:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:00.879 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:00.879 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:00.879 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.139 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.139 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.139 05:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.139 05:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.139 05:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.139 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.139 "name": "Existed_Raid", 00:15:01.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.139 "strip_size_kb": 64, 00:15:01.139 "state": "configuring", 00:15:01.139 "raid_level": "raid5f", 00:15:01.139 "superblock": false, 00:15:01.139 "num_base_bdevs": 3, 00:15:01.139 "num_base_bdevs_discovered": 1, 00:15:01.139 "num_base_bdevs_operational": 3, 00:15:01.139 "base_bdevs_list": [ 00:15:01.139 { 00:15:01.139 "name": "BaseBdev1", 00:15:01.139 "uuid": "4762e510-c51e-4e5b-836b-e1b8c721f5a3", 00:15:01.139 "is_configured": true, 00:15:01.139 "data_offset": 0, 00:15:01.139 "data_size": 65536 00:15:01.139 }, 00:15:01.139 { 00:15:01.139 "name": null, 00:15:01.139 "uuid": "75f6393e-1b0d-4b58-bea8-9cda4a9dd730", 00:15:01.139 "is_configured": false, 00:15:01.139 "data_offset": 0, 00:15:01.139 "data_size": 65536 00:15:01.139 }, 00:15:01.139 { 00:15:01.139 "name": null, 
00:15:01.139 "uuid": "10850e1d-d3f5-4da5-8b65-a940fd221a0a", 00:15:01.139 "is_configured": false, 00:15:01.139 "data_offset": 0, 00:15:01.139 "data_size": 65536 00:15:01.139 } 00:15:01.139 ] 00:15:01.139 }' 00:15:01.139 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.139 05:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.399 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.399 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:01.399 05:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.399 05:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.400 [2024-12-12 05:53:08.853473] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.400 05:53:08 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.400 "name": "Existed_Raid", 00:15:01.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.400 "strip_size_kb": 64, 00:15:01.400 "state": "configuring", 00:15:01.400 "raid_level": "raid5f", 00:15:01.400 "superblock": false, 00:15:01.400 "num_base_bdevs": 3, 00:15:01.400 "num_base_bdevs_discovered": 2, 00:15:01.400 "num_base_bdevs_operational": 3, 00:15:01.400 "base_bdevs_list": [ 00:15:01.400 { 
00:15:01.400 "name": "BaseBdev1", 00:15:01.400 "uuid": "4762e510-c51e-4e5b-836b-e1b8c721f5a3", 00:15:01.400 "is_configured": true, 00:15:01.400 "data_offset": 0, 00:15:01.400 "data_size": 65536 00:15:01.400 }, 00:15:01.400 { 00:15:01.400 "name": null, 00:15:01.400 "uuid": "75f6393e-1b0d-4b58-bea8-9cda4a9dd730", 00:15:01.400 "is_configured": false, 00:15:01.400 "data_offset": 0, 00:15:01.400 "data_size": 65536 00:15:01.400 }, 00:15:01.400 { 00:15:01.400 "name": "BaseBdev3", 00:15:01.400 "uuid": "10850e1d-d3f5-4da5-8b65-a940fd221a0a", 00:15:01.400 "is_configured": true, 00:15:01.400 "data_offset": 0, 00:15:01.400 "data_size": 65536 00:15:01.400 } 00:15:01.400 ] 00:15:01.400 }' 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.400 05:53:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.972 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:01.972 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.972 05:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.972 05:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.973 [2024-12-12 05:53:09.316694] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:01.973 "name": "Existed_Raid", 00:15:01.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:01.973 "strip_size_kb": 64, 00:15:01.973 "state": "configuring", 00:15:01.973 "raid_level": "raid5f", 00:15:01.973 "superblock": false, 00:15:01.973 "num_base_bdevs": 3, 00:15:01.973 "num_base_bdevs_discovered": 1, 00:15:01.973 "num_base_bdevs_operational": 3, 00:15:01.973 "base_bdevs_list": [ 00:15:01.973 { 00:15:01.973 "name": null, 00:15:01.973 "uuid": "4762e510-c51e-4e5b-836b-e1b8c721f5a3", 00:15:01.973 "is_configured": false, 00:15:01.973 "data_offset": 0, 00:15:01.973 "data_size": 65536 00:15:01.973 }, 00:15:01.973 { 00:15:01.973 "name": null, 00:15:01.973 "uuid": "75f6393e-1b0d-4b58-bea8-9cda4a9dd730", 00:15:01.973 "is_configured": false, 00:15:01.973 "data_offset": 0, 00:15:01.973 "data_size": 65536 00:15:01.973 }, 00:15:01.973 { 00:15:01.973 "name": "BaseBdev3", 00:15:01.973 "uuid": "10850e1d-d3f5-4da5-8b65-a940fd221a0a", 00:15:01.973 "is_configured": true, 00:15:01.973 "data_offset": 0, 00:15:01.973 "data_size": 65536 00:15:01.973 } 00:15:01.973 ] 00:15:01.973 }' 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:01.973 05:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.543 [2024-12-12 05:53:09.853378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:02.543 05:53:09 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:02.543 "name": "Existed_Raid", 00:15:02.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:02.543 "strip_size_kb": 64, 00:15:02.543 "state": "configuring", 00:15:02.543 "raid_level": "raid5f", 00:15:02.543 "superblock": false, 00:15:02.543 "num_base_bdevs": 3, 00:15:02.543 "num_base_bdevs_discovered": 2, 00:15:02.543 "num_base_bdevs_operational": 3, 00:15:02.543 "base_bdevs_list": [ 00:15:02.543 { 00:15:02.543 "name": null, 00:15:02.543 "uuid": "4762e510-c51e-4e5b-836b-e1b8c721f5a3", 00:15:02.543 "is_configured": false, 00:15:02.543 "data_offset": 0, 00:15:02.543 "data_size": 65536 00:15:02.543 }, 00:15:02.543 { 00:15:02.543 "name": "BaseBdev2", 00:15:02.543 "uuid": "75f6393e-1b0d-4b58-bea8-9cda4a9dd730", 00:15:02.543 "is_configured": true, 00:15:02.543 "data_offset": 0, 00:15:02.543 "data_size": 65536 00:15:02.543 }, 00:15:02.543 { 00:15:02.543 "name": "BaseBdev3", 00:15:02.543 "uuid": "10850e1d-d3f5-4da5-8b65-a940fd221a0a", 00:15:02.543 "is_configured": true, 00:15:02.543 "data_offset": 0, 00:15:02.543 "data_size": 65536 00:15:02.543 } 00:15:02.543 ] 00:15:02.543 }' 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:02.543 05:53:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:02.803 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:02.803 05:53:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:02.803 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.803 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.063 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.063 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:03.063 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.063 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.063 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4762e510-c51e-4e5b-836b-e1b8c721f5a3 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.064 [2024-12-12 05:53:10.416273] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:03.064 [2024-12-12 05:53:10.416314] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:03.064 [2024-12-12 05:53:10.416323] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:03.064 [2024-12-12 05:53:10.416623] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000006220 00:15:03.064 [2024-12-12 05:53:10.421785] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:03.064 NewBaseBdev 00:15:03.064 [2024-12-12 05:53:10.421905] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:03.064 [2024-12-12 05:53:10.422193] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.064 05:53:10 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.064 [ 00:15:03.064 { 00:15:03.064 "name": "NewBaseBdev", 00:15:03.064 "aliases": [ 00:15:03.064 "4762e510-c51e-4e5b-836b-e1b8c721f5a3" 00:15:03.064 ], 00:15:03.064 "product_name": "Malloc disk", 00:15:03.064 "block_size": 512, 00:15:03.064 "num_blocks": 65536, 00:15:03.064 "uuid": "4762e510-c51e-4e5b-836b-e1b8c721f5a3", 00:15:03.064 "assigned_rate_limits": { 00:15:03.064 "rw_ios_per_sec": 0, 00:15:03.064 "rw_mbytes_per_sec": 0, 00:15:03.064 "r_mbytes_per_sec": 0, 00:15:03.064 "w_mbytes_per_sec": 0 00:15:03.064 }, 00:15:03.064 "claimed": true, 00:15:03.064 "claim_type": "exclusive_write", 00:15:03.064 "zoned": false, 00:15:03.064 "supported_io_types": { 00:15:03.064 "read": true, 00:15:03.064 "write": true, 00:15:03.064 "unmap": true, 00:15:03.064 "flush": true, 00:15:03.064 "reset": true, 00:15:03.064 "nvme_admin": false, 00:15:03.064 "nvme_io": false, 00:15:03.064 "nvme_io_md": false, 00:15:03.064 "write_zeroes": true, 00:15:03.064 "zcopy": true, 00:15:03.064 "get_zone_info": false, 00:15:03.064 "zone_management": false, 00:15:03.064 "zone_append": false, 00:15:03.064 "compare": false, 00:15:03.064 "compare_and_write": false, 00:15:03.064 "abort": true, 00:15:03.064 "seek_hole": false, 00:15:03.064 "seek_data": false, 00:15:03.064 "copy": true, 00:15:03.064 "nvme_iov_md": false 00:15:03.064 }, 00:15:03.064 "memory_domains": [ 00:15:03.064 { 00:15:03.064 "dma_device_id": "system", 00:15:03.064 "dma_device_type": 1 00:15:03.064 }, 00:15:03.064 { 00:15:03.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:03.064 "dma_device_type": 2 00:15:03.064 } 00:15:03.064 ], 00:15:03.064 "driver_specific": {} 00:15:03.064 } 00:15:03.064 ] 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:15:03.064 05:53:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.064 "name": "Existed_Raid", 00:15:03.064 "uuid": "e77e5b62-3e10-4a46-8759-ff037f93dfbd", 00:15:03.064 "strip_size_kb": 64, 00:15:03.064 "state": "online", 
00:15:03.064 "raid_level": "raid5f", 00:15:03.064 "superblock": false, 00:15:03.064 "num_base_bdevs": 3, 00:15:03.064 "num_base_bdevs_discovered": 3, 00:15:03.064 "num_base_bdevs_operational": 3, 00:15:03.064 "base_bdevs_list": [ 00:15:03.064 { 00:15:03.064 "name": "NewBaseBdev", 00:15:03.064 "uuid": "4762e510-c51e-4e5b-836b-e1b8c721f5a3", 00:15:03.064 "is_configured": true, 00:15:03.064 "data_offset": 0, 00:15:03.064 "data_size": 65536 00:15:03.064 }, 00:15:03.064 { 00:15:03.064 "name": "BaseBdev2", 00:15:03.064 "uuid": "75f6393e-1b0d-4b58-bea8-9cda4a9dd730", 00:15:03.064 "is_configured": true, 00:15:03.064 "data_offset": 0, 00:15:03.064 "data_size": 65536 00:15:03.064 }, 00:15:03.064 { 00:15:03.064 "name": "BaseBdev3", 00:15:03.064 "uuid": "10850e1d-d3f5-4da5-8b65-a940fd221a0a", 00:15:03.064 "is_configured": true, 00:15:03.064 "data_offset": 0, 00:15:03.064 "data_size": 65536 00:15:03.064 } 00:15:03.064 ] 00:15:03.064 }' 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.064 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.634 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:03.634 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:03.634 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:03.634 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:03.634 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:03.634 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:03.634 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:03.634 05:53:10 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:03.634 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.634 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.634 [2024-12-12 05:53:10.935770] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.634 05:53:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.634 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:03.634 "name": "Existed_Raid", 00:15:03.634 "aliases": [ 00:15:03.634 "e77e5b62-3e10-4a46-8759-ff037f93dfbd" 00:15:03.634 ], 00:15:03.634 "product_name": "Raid Volume", 00:15:03.634 "block_size": 512, 00:15:03.634 "num_blocks": 131072, 00:15:03.634 "uuid": "e77e5b62-3e10-4a46-8759-ff037f93dfbd", 00:15:03.634 "assigned_rate_limits": { 00:15:03.634 "rw_ios_per_sec": 0, 00:15:03.634 "rw_mbytes_per_sec": 0, 00:15:03.634 "r_mbytes_per_sec": 0, 00:15:03.634 "w_mbytes_per_sec": 0 00:15:03.634 }, 00:15:03.634 "claimed": false, 00:15:03.634 "zoned": false, 00:15:03.634 "supported_io_types": { 00:15:03.634 "read": true, 00:15:03.634 "write": true, 00:15:03.634 "unmap": false, 00:15:03.634 "flush": false, 00:15:03.634 "reset": true, 00:15:03.634 "nvme_admin": false, 00:15:03.634 "nvme_io": false, 00:15:03.634 "nvme_io_md": false, 00:15:03.634 "write_zeroes": true, 00:15:03.634 "zcopy": false, 00:15:03.634 "get_zone_info": false, 00:15:03.634 "zone_management": false, 00:15:03.634 "zone_append": false, 00:15:03.634 "compare": false, 00:15:03.634 "compare_and_write": false, 00:15:03.634 "abort": false, 00:15:03.634 "seek_hole": false, 00:15:03.634 "seek_data": false, 00:15:03.634 "copy": false, 00:15:03.634 "nvme_iov_md": false 00:15:03.634 }, 00:15:03.634 "driver_specific": { 00:15:03.634 "raid": { 00:15:03.634 "uuid": 
"e77e5b62-3e10-4a46-8759-ff037f93dfbd", 00:15:03.634 "strip_size_kb": 64, 00:15:03.634 "state": "online", 00:15:03.634 "raid_level": "raid5f", 00:15:03.634 "superblock": false, 00:15:03.634 "num_base_bdevs": 3, 00:15:03.634 "num_base_bdevs_discovered": 3, 00:15:03.634 "num_base_bdevs_operational": 3, 00:15:03.634 "base_bdevs_list": [ 00:15:03.634 { 00:15:03.634 "name": "NewBaseBdev", 00:15:03.634 "uuid": "4762e510-c51e-4e5b-836b-e1b8c721f5a3", 00:15:03.634 "is_configured": true, 00:15:03.634 "data_offset": 0, 00:15:03.634 "data_size": 65536 00:15:03.634 }, 00:15:03.634 { 00:15:03.634 "name": "BaseBdev2", 00:15:03.634 "uuid": "75f6393e-1b0d-4b58-bea8-9cda4a9dd730", 00:15:03.634 "is_configured": true, 00:15:03.634 "data_offset": 0, 00:15:03.634 "data_size": 65536 00:15:03.634 }, 00:15:03.634 { 00:15:03.634 "name": "BaseBdev3", 00:15:03.634 "uuid": "10850e1d-d3f5-4da5-8b65-a940fd221a0a", 00:15:03.634 "is_configured": true, 00:15:03.634 "data_offset": 0, 00:15:03.634 "data_size": 65536 00:15:03.634 } 00:15:03.634 ] 00:15:03.634 } 00:15:03.634 } 00:15:03.634 }' 00:15:03.634 05:53:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:03.634 05:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:03.634 BaseBdev2 00:15:03.634 BaseBdev3' 00:15:03.634 05:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.634 05:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:03.634 05:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.634 05:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:03.634 05:53:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.634 05:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.634 05:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.634 05:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.634 05:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.634 05:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.634 05:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.634 05:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:03.634 05:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.634 05:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.634 05:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.634 05:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.634 05:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.634 05:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.634 05:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:03.895 [2024-12-12 05:53:11.199161] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:03.895 [2024-12-12 05:53:11.199185] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:03.895 [2024-12-12 05:53:11.199245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:03.895 [2024-12-12 05:53:11.199528] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:03.895 [2024-12-12 05:53:11.199542] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80286 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80286 ']' 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80286 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80286 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:03.895 killing process with pid 80286 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80286' 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80286 00:15:03.895 [2024-12-12 05:53:11.247197] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:03.895 05:53:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80286 00:15:04.154 [2024-12-12 05:53:11.526886] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:05.094 ************************************ 00:15:05.094 END TEST raid5f_state_function_test 00:15:05.094 ************************************ 00:15:05.094 05:53:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:05.094 00:15:05.094 real 0m10.310s 00:15:05.094 user 0m16.411s 00:15:05.094 sys 0m1.858s 00:15:05.094 05:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:05.094 05:53:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.354 05:53:12 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:05.354 05:53:12 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:05.354 05:53:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:05.354 05:53:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:05.354 ************************************ 00:15:05.354 START TEST raid5f_state_function_test_sb 00:15:05.354 ************************************ 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:05.354 05:53:12 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:05.354 Process raid pid: 80847 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80847 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80847' 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80847 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80847 ']' 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:05.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:05.354 05:53:12 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:05.354 [2024-12-12 05:53:12.748272] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:15:05.354 [2024-12-12 05:53:12.748475] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:05.613 [2024-12-12 05:53:12.927968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.613 [2024-12-12 05:53:13.037150] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:05.872 [2024-12-12 05:53:13.233494] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:05.872 [2024-12-12 05:53:13.233616] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:06.132 05:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:06.132 05:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:06.132 05:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:06.132 05:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.132 05:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.132 [2024-12-12 05:53:13.551843] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:06.132 [2024-12-12 05:53:13.551969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:06.132 [2024-12-12 05:53:13.552002] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:06.132 [2024-12-12 05:53:13.552026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:06.132 [2024-12-12 05:53:13.552044] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:06.132 [2024-12-12 05:53:13.552065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:06.132 05:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.132 05:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:06.132 05:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.132 05:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.132 05:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.133 05:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.133 05:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.133 05:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.133 05:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.133 05:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.133 05:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.133 05:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.133 05:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.133 05:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.133 05:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.133 05:53:13 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.133 05:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.133 "name": "Existed_Raid", 00:15:06.133 "uuid": "4274b222-0f5d-4e97-adfe-ef396b2715e3", 00:15:06.133 "strip_size_kb": 64, 00:15:06.133 "state": "configuring", 00:15:06.133 "raid_level": "raid5f", 00:15:06.133 "superblock": true, 00:15:06.133 "num_base_bdevs": 3, 00:15:06.133 "num_base_bdevs_discovered": 0, 00:15:06.133 "num_base_bdevs_operational": 3, 00:15:06.133 "base_bdevs_list": [ 00:15:06.133 { 00:15:06.133 "name": "BaseBdev1", 00:15:06.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.133 "is_configured": false, 00:15:06.133 "data_offset": 0, 00:15:06.133 "data_size": 0 00:15:06.133 }, 00:15:06.133 { 00:15:06.133 "name": "BaseBdev2", 00:15:06.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.133 "is_configured": false, 00:15:06.133 "data_offset": 0, 00:15:06.133 "data_size": 0 00:15:06.133 }, 00:15:06.133 { 00:15:06.133 "name": "BaseBdev3", 00:15:06.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.133 "is_configured": false, 00:15:06.133 "data_offset": 0, 00:15:06.133 "data_size": 0 00:15:06.133 } 00:15:06.133 ] 00:15:06.133 }' 00:15:06.133 05:53:13 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.133 05:53:13 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.701 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:06.701 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.701 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.701 [2024-12-12 05:53:14.014973] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:06.701 
[2024-12-12 05:53:14.015008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:06.701 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.701 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:06.701 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.701 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.701 [2024-12-12 05:53:14.026959] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:06.701 [2024-12-12 05:53:14.027001] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:06.701 [2024-12-12 05:53:14.027010] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:06.701 [2024-12-12 05:53:14.027034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:06.701 [2024-12-12 05:53:14.027040] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:06.701 [2024-12-12 05:53:14.027048] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:06.701 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.701 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:06.701 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.701 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.702 [2024-12-12 05:53:14.073007] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:06.702 BaseBdev1 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.702 [ 00:15:06.702 { 00:15:06.702 "name": "BaseBdev1", 00:15:06.702 "aliases": [ 00:15:06.702 "39d9475c-d285-4b72-816c-ce7154ac6971" 00:15:06.702 ], 00:15:06.702 "product_name": "Malloc disk", 00:15:06.702 "block_size": 512, 00:15:06.702 
"num_blocks": 65536, 00:15:06.702 "uuid": "39d9475c-d285-4b72-816c-ce7154ac6971", 00:15:06.702 "assigned_rate_limits": { 00:15:06.702 "rw_ios_per_sec": 0, 00:15:06.702 "rw_mbytes_per_sec": 0, 00:15:06.702 "r_mbytes_per_sec": 0, 00:15:06.702 "w_mbytes_per_sec": 0 00:15:06.702 }, 00:15:06.702 "claimed": true, 00:15:06.702 "claim_type": "exclusive_write", 00:15:06.702 "zoned": false, 00:15:06.702 "supported_io_types": { 00:15:06.702 "read": true, 00:15:06.702 "write": true, 00:15:06.702 "unmap": true, 00:15:06.702 "flush": true, 00:15:06.702 "reset": true, 00:15:06.702 "nvme_admin": false, 00:15:06.702 "nvme_io": false, 00:15:06.702 "nvme_io_md": false, 00:15:06.702 "write_zeroes": true, 00:15:06.702 "zcopy": true, 00:15:06.702 "get_zone_info": false, 00:15:06.702 "zone_management": false, 00:15:06.702 "zone_append": false, 00:15:06.702 "compare": false, 00:15:06.702 "compare_and_write": false, 00:15:06.702 "abort": true, 00:15:06.702 "seek_hole": false, 00:15:06.702 "seek_data": false, 00:15:06.702 "copy": true, 00:15:06.702 "nvme_iov_md": false 00:15:06.702 }, 00:15:06.702 "memory_domains": [ 00:15:06.702 { 00:15:06.702 "dma_device_id": "system", 00:15:06.702 "dma_device_type": 1 00:15:06.702 }, 00:15:06.702 { 00:15:06.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.702 "dma_device_type": 2 00:15:06.702 } 00:15:06.702 ], 00:15:06.702 "driver_specific": {} 00:15:06.702 } 00:15:06.702 ] 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:06.702 "name": "Existed_Raid", 00:15:06.702 "uuid": "f775ff6b-2440-45b5-ba86-6db44ae7fe32", 00:15:06.702 "strip_size_kb": 64, 00:15:06.702 "state": "configuring", 00:15:06.702 "raid_level": "raid5f", 00:15:06.702 "superblock": true, 00:15:06.702 "num_base_bdevs": 3, 00:15:06.702 "num_base_bdevs_discovered": 1, 00:15:06.702 "num_base_bdevs_operational": 3, 00:15:06.702 "base_bdevs_list": [ 00:15:06.702 { 00:15:06.702 
"name": "BaseBdev1", 00:15:06.702 "uuid": "39d9475c-d285-4b72-816c-ce7154ac6971", 00:15:06.702 "is_configured": true, 00:15:06.702 "data_offset": 2048, 00:15:06.702 "data_size": 63488 00:15:06.702 }, 00:15:06.702 { 00:15:06.702 "name": "BaseBdev2", 00:15:06.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.702 "is_configured": false, 00:15:06.702 "data_offset": 0, 00:15:06.702 "data_size": 0 00:15:06.702 }, 00:15:06.702 { 00:15:06.702 "name": "BaseBdev3", 00:15:06.702 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:06.702 "is_configured": false, 00:15:06.702 "data_offset": 0, 00:15:06.702 "data_size": 0 00:15:06.702 } 00:15:06.702 ] 00:15:06.702 }' 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:06.702 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.294 [2024-12-12 05:53:14.564181] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:07.294 [2024-12-12 05:53:14.564221] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:07.294 [2024-12-12 05:53:14.576225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:07.294 [2024-12-12 05:53:14.577930] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:07.294 [2024-12-12 05:53:14.577972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:07.294 [2024-12-12 05:53:14.577982] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:07.294 [2024-12-12 05:53:14.577990] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.294 "name": "Existed_Raid", 00:15:07.294 "uuid": "b39ae850-1a86-453b-b8df-8c5d9aae3f6e", 00:15:07.294 "strip_size_kb": 64, 00:15:07.294 "state": "configuring", 00:15:07.294 "raid_level": "raid5f", 00:15:07.294 "superblock": true, 00:15:07.294 "num_base_bdevs": 3, 00:15:07.294 "num_base_bdevs_discovered": 1, 00:15:07.294 "num_base_bdevs_operational": 3, 00:15:07.294 "base_bdevs_list": [ 00:15:07.294 { 00:15:07.294 "name": "BaseBdev1", 00:15:07.294 "uuid": "39d9475c-d285-4b72-816c-ce7154ac6971", 00:15:07.294 "is_configured": true, 00:15:07.294 "data_offset": 2048, 00:15:07.294 "data_size": 63488 00:15:07.294 }, 00:15:07.294 { 00:15:07.294 "name": "BaseBdev2", 00:15:07.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.294 "is_configured": false, 00:15:07.294 "data_offset": 0, 00:15:07.294 "data_size": 0 00:15:07.294 }, 00:15:07.294 { 00:15:07.294 "name": "BaseBdev3", 00:15:07.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.294 "is_configured": false, 00:15:07.294 "data_offset": 0, 00:15:07.294 "data_size": 
0 00:15:07.294 } 00:15:07.294 ] 00:15:07.294 }' 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.294 05:53:14 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.555 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:07.555 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.555 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.555 [2024-12-12 05:53:15.061394] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:07.555 BaseBdev2 00:15:07.555 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.555 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:07.555 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:07.555 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:07.555 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:07.555 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:07.555 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:07.555 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:07.555 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.555 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.555 05:53:15 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.555 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:07.555 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.555 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.815 [ 00:15:07.815 { 00:15:07.815 "name": "BaseBdev2", 00:15:07.815 "aliases": [ 00:15:07.815 "65514da7-d8f8-4d4e-9ecf-82381c9bc3a0" 00:15:07.815 ], 00:15:07.815 "product_name": "Malloc disk", 00:15:07.815 "block_size": 512, 00:15:07.815 "num_blocks": 65536, 00:15:07.815 "uuid": "65514da7-d8f8-4d4e-9ecf-82381c9bc3a0", 00:15:07.815 "assigned_rate_limits": { 00:15:07.815 "rw_ios_per_sec": 0, 00:15:07.815 "rw_mbytes_per_sec": 0, 00:15:07.815 "r_mbytes_per_sec": 0, 00:15:07.815 "w_mbytes_per_sec": 0 00:15:07.815 }, 00:15:07.815 "claimed": true, 00:15:07.815 "claim_type": "exclusive_write", 00:15:07.815 "zoned": false, 00:15:07.815 "supported_io_types": { 00:15:07.815 "read": true, 00:15:07.815 "write": true, 00:15:07.815 "unmap": true, 00:15:07.815 "flush": true, 00:15:07.815 "reset": true, 00:15:07.815 "nvme_admin": false, 00:15:07.815 "nvme_io": false, 00:15:07.815 "nvme_io_md": false, 00:15:07.815 "write_zeroes": true, 00:15:07.815 "zcopy": true, 00:15:07.815 "get_zone_info": false, 00:15:07.815 "zone_management": false, 00:15:07.815 "zone_append": false, 00:15:07.815 "compare": false, 00:15:07.815 "compare_and_write": false, 00:15:07.815 "abort": true, 00:15:07.815 "seek_hole": false, 00:15:07.815 "seek_data": false, 00:15:07.815 "copy": true, 00:15:07.815 "nvme_iov_md": false 00:15:07.815 }, 00:15:07.815 "memory_domains": [ 00:15:07.816 { 00:15:07.816 "dma_device_id": "system", 00:15:07.816 "dma_device_type": 1 00:15:07.816 }, 00:15:07.816 { 00:15:07.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.816 "dma_device_type": 2 00:15:07.816 } 
00:15:07.816 ], 00:15:07.816 "driver_specific": {} 00:15:07.816 } 00:15:07.816 ] 00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.816 "name": "Existed_Raid", 00:15:07.816 "uuid": "b39ae850-1a86-453b-b8df-8c5d9aae3f6e", 00:15:07.816 "strip_size_kb": 64, 00:15:07.816 "state": "configuring", 00:15:07.816 "raid_level": "raid5f", 00:15:07.816 "superblock": true, 00:15:07.816 "num_base_bdevs": 3, 00:15:07.816 "num_base_bdevs_discovered": 2, 00:15:07.816 "num_base_bdevs_operational": 3, 00:15:07.816 "base_bdevs_list": [ 00:15:07.816 { 00:15:07.816 "name": "BaseBdev1", 00:15:07.816 "uuid": "39d9475c-d285-4b72-816c-ce7154ac6971", 00:15:07.816 "is_configured": true, 00:15:07.816 "data_offset": 2048, 00:15:07.816 "data_size": 63488 00:15:07.816 }, 00:15:07.816 { 00:15:07.816 "name": "BaseBdev2", 00:15:07.816 "uuid": "65514da7-d8f8-4d4e-9ecf-82381c9bc3a0", 00:15:07.816 "is_configured": true, 00:15:07.816 "data_offset": 2048, 00:15:07.816 "data_size": 63488 00:15:07.816 }, 00:15:07.816 { 00:15:07.816 "name": "BaseBdev3", 00:15:07.816 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.816 "is_configured": false, 00:15:07.816 "data_offset": 0, 00:15:07.816 "data_size": 0 00:15:07.816 } 00:15:07.816 ] 00:15:07.816 }' 00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.816 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.075 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:08.075 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:15:08.075 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.335 [2024-12-12 05:53:15.631492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:08.335 [2024-12-12 05:53:15.631899] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:08.335 [2024-12-12 05:53:15.631959] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:08.335 [2024-12-12 05:53:15.632267] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:08.335 BaseBdev3 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.335 [2024-12-12 05:53:15.637720] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:08.335 [2024-12-12 05:53:15.637777] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:08.335 [2024-12-12 05:53:15.638012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.335 [ 00:15:08.335 { 00:15:08.335 "name": "BaseBdev3", 00:15:08.335 "aliases": [ 00:15:08.335 "3ee45ee2-f1f0-4947-b167-52b2aeccee65" 00:15:08.335 ], 00:15:08.335 "product_name": "Malloc disk", 00:15:08.335 "block_size": 512, 00:15:08.335 "num_blocks": 65536, 00:15:08.335 "uuid": "3ee45ee2-f1f0-4947-b167-52b2aeccee65", 00:15:08.335 "assigned_rate_limits": { 00:15:08.335 "rw_ios_per_sec": 0, 00:15:08.335 "rw_mbytes_per_sec": 0, 00:15:08.335 "r_mbytes_per_sec": 0, 00:15:08.335 "w_mbytes_per_sec": 0 00:15:08.335 }, 00:15:08.335 "claimed": true, 00:15:08.335 "claim_type": "exclusive_write", 00:15:08.335 "zoned": false, 00:15:08.335 "supported_io_types": { 00:15:08.335 "read": true, 00:15:08.335 "write": true, 00:15:08.335 "unmap": true, 00:15:08.335 "flush": true, 00:15:08.335 "reset": true, 00:15:08.335 "nvme_admin": false, 00:15:08.335 "nvme_io": false, 00:15:08.335 "nvme_io_md": false, 00:15:08.335 "write_zeroes": true, 00:15:08.335 "zcopy": true, 00:15:08.335 "get_zone_info": false, 00:15:08.335 "zone_management": false, 00:15:08.335 "zone_append": false, 00:15:08.335 "compare": false, 00:15:08.335 "compare_and_write": false, 00:15:08.335 "abort": true, 00:15:08.335 "seek_hole": false, 00:15:08.335 "seek_data": false, 00:15:08.335 "copy": true, 00:15:08.335 
"nvme_iov_md": false 00:15:08.335 }, 00:15:08.335 "memory_domains": [ 00:15:08.335 { 00:15:08.335 "dma_device_id": "system", 00:15:08.335 "dma_device_type": 1 00:15:08.335 }, 00:15:08.335 { 00:15:08.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.335 "dma_device_type": 2 00:15:08.335 } 00:15:08.335 ], 00:15:08.335 "driver_specific": {} 00:15:08.335 } 00:15:08.335 ] 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:08.335 "name": "Existed_Raid", 00:15:08.335 "uuid": "b39ae850-1a86-453b-b8df-8c5d9aae3f6e", 00:15:08.335 "strip_size_kb": 64, 00:15:08.335 "state": "online", 00:15:08.335 "raid_level": "raid5f", 00:15:08.335 "superblock": true, 00:15:08.335 "num_base_bdevs": 3, 00:15:08.335 "num_base_bdevs_discovered": 3, 00:15:08.335 "num_base_bdevs_operational": 3, 00:15:08.335 "base_bdevs_list": [ 00:15:08.335 { 00:15:08.335 "name": "BaseBdev1", 00:15:08.335 "uuid": "39d9475c-d285-4b72-816c-ce7154ac6971", 00:15:08.335 "is_configured": true, 00:15:08.335 "data_offset": 2048, 00:15:08.335 "data_size": 63488 00:15:08.335 }, 00:15:08.335 { 00:15:08.335 "name": "BaseBdev2", 00:15:08.335 "uuid": "65514da7-d8f8-4d4e-9ecf-82381c9bc3a0", 00:15:08.335 "is_configured": true, 00:15:08.335 "data_offset": 2048, 00:15:08.335 "data_size": 63488 00:15:08.335 }, 00:15:08.335 { 00:15:08.335 "name": "BaseBdev3", 00:15:08.335 "uuid": "3ee45ee2-f1f0-4947-b167-52b2aeccee65", 00:15:08.335 "is_configured": true, 00:15:08.335 "data_offset": 2048, 00:15:08.335 "data_size": 63488 00:15:08.335 } 00:15:08.335 ] 00:15:08.335 }' 00:15:08.335 05:53:15 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:08.335 05:53:15 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.595 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:08.595 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.854 [2024-12-12 05:53:16.131142] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:08.854 "name": "Existed_Raid", 00:15:08.854 "aliases": [ 00:15:08.854 "b39ae850-1a86-453b-b8df-8c5d9aae3f6e" 00:15:08.854 ], 00:15:08.854 "product_name": "Raid Volume", 00:15:08.854 "block_size": 512, 00:15:08.854 "num_blocks": 126976, 00:15:08.854 "uuid": "b39ae850-1a86-453b-b8df-8c5d9aae3f6e", 00:15:08.854 "assigned_rate_limits": { 00:15:08.854 "rw_ios_per_sec": 0, 00:15:08.854 
"rw_mbytes_per_sec": 0, 00:15:08.854 "r_mbytes_per_sec": 0, 00:15:08.854 "w_mbytes_per_sec": 0 00:15:08.854 }, 00:15:08.854 "claimed": false, 00:15:08.854 "zoned": false, 00:15:08.854 "supported_io_types": { 00:15:08.854 "read": true, 00:15:08.854 "write": true, 00:15:08.854 "unmap": false, 00:15:08.854 "flush": false, 00:15:08.854 "reset": true, 00:15:08.854 "nvme_admin": false, 00:15:08.854 "nvme_io": false, 00:15:08.854 "nvme_io_md": false, 00:15:08.854 "write_zeroes": true, 00:15:08.854 "zcopy": false, 00:15:08.854 "get_zone_info": false, 00:15:08.854 "zone_management": false, 00:15:08.854 "zone_append": false, 00:15:08.854 "compare": false, 00:15:08.854 "compare_and_write": false, 00:15:08.854 "abort": false, 00:15:08.854 "seek_hole": false, 00:15:08.854 "seek_data": false, 00:15:08.854 "copy": false, 00:15:08.854 "nvme_iov_md": false 00:15:08.854 }, 00:15:08.854 "driver_specific": { 00:15:08.854 "raid": { 00:15:08.854 "uuid": "b39ae850-1a86-453b-b8df-8c5d9aae3f6e", 00:15:08.854 "strip_size_kb": 64, 00:15:08.854 "state": "online", 00:15:08.854 "raid_level": "raid5f", 00:15:08.854 "superblock": true, 00:15:08.854 "num_base_bdevs": 3, 00:15:08.854 "num_base_bdevs_discovered": 3, 00:15:08.854 "num_base_bdevs_operational": 3, 00:15:08.854 "base_bdevs_list": [ 00:15:08.854 { 00:15:08.854 "name": "BaseBdev1", 00:15:08.854 "uuid": "39d9475c-d285-4b72-816c-ce7154ac6971", 00:15:08.854 "is_configured": true, 00:15:08.854 "data_offset": 2048, 00:15:08.854 "data_size": 63488 00:15:08.854 }, 00:15:08.854 { 00:15:08.854 "name": "BaseBdev2", 00:15:08.854 "uuid": "65514da7-d8f8-4d4e-9ecf-82381c9bc3a0", 00:15:08.854 "is_configured": true, 00:15:08.854 "data_offset": 2048, 00:15:08.854 "data_size": 63488 00:15:08.854 }, 00:15:08.854 { 00:15:08.854 "name": "BaseBdev3", 00:15:08.854 "uuid": "3ee45ee2-f1f0-4947-b167-52b2aeccee65", 00:15:08.854 "is_configured": true, 00:15:08.854 "data_offset": 2048, 00:15:08.854 "data_size": 63488 00:15:08.854 } 00:15:08.854 ] 00:15:08.854 } 
00:15:08.854 } 00:15:08.854 }' 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:08.854 BaseBdev2 00:15:08.854 BaseBdev3' 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:08.854 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.855 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:08.855 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.115 [2024-12-12 
05:53:16.402541] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.115 05:53:16 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:09.115 "name": "Existed_Raid", 00:15:09.115 "uuid": "b39ae850-1a86-453b-b8df-8c5d9aae3f6e", 00:15:09.115 "strip_size_kb": 64, 00:15:09.115 "state": "online", 00:15:09.115 "raid_level": "raid5f", 00:15:09.115 "superblock": true, 00:15:09.115 "num_base_bdevs": 3, 00:15:09.115 "num_base_bdevs_discovered": 2, 00:15:09.115 "num_base_bdevs_operational": 2, 00:15:09.115 "base_bdevs_list": [ 00:15:09.115 { 00:15:09.115 "name": null, 00:15:09.115 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:09.115 "is_configured": false, 00:15:09.115 "data_offset": 0, 00:15:09.115 "data_size": 63488 00:15:09.115 }, 00:15:09.115 { 00:15:09.115 "name": "BaseBdev2", 00:15:09.115 "uuid": "65514da7-d8f8-4d4e-9ecf-82381c9bc3a0", 00:15:09.115 "is_configured": true, 00:15:09.115 "data_offset": 2048, 00:15:09.115 "data_size": 63488 00:15:09.115 }, 00:15:09.115 { 00:15:09.115 "name": "BaseBdev3", 00:15:09.115 "uuid": "3ee45ee2-f1f0-4947-b167-52b2aeccee65", 00:15:09.115 "is_configured": true, 00:15:09.115 "data_offset": 2048, 00:15:09.115 "data_size": 63488 00:15:09.115 } 00:15:09.115 ] 00:15:09.115 }' 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:09.115 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:09.685 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:09.685 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:09.685 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:09.685 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.685 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.685 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.685 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.685 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:09.685 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:09.685 05:53:16 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:09.685 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.685 05:53:16 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.685 [2024-12-12 05:53:16.991450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:09.685 [2024-12-12 05:53:16.991662] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:09.685 [2024-12-12 05:53:17.082438] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.685 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.685 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:09.685 05:53:17 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:09.685 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:09.685 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.685 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.685 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.685 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.685 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:09.685 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:09.685 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:09.685 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.685 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.685 [2024-12-12 05:53:17.130371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:09.685 [2024-12-12 05:53:17.130429] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.945 
05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.945 BaseBdev2 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:09.945 05:53:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.945 [ 00:15:09.945 { 00:15:09.945 "name": "BaseBdev2", 00:15:09.945 "aliases": [ 00:15:09.945 "29928575-b015-46b2-8f65-008cb3a9fec2" 00:15:09.945 ], 00:15:09.945 "product_name": "Malloc disk", 00:15:09.945 "block_size": 512, 00:15:09.945 "num_blocks": 65536, 00:15:09.945 "uuid": "29928575-b015-46b2-8f65-008cb3a9fec2", 00:15:09.945 "assigned_rate_limits": { 00:15:09.945 "rw_ios_per_sec": 0, 00:15:09.945 "rw_mbytes_per_sec": 0, 00:15:09.945 "r_mbytes_per_sec": 0, 00:15:09.945 "w_mbytes_per_sec": 0 00:15:09.945 }, 00:15:09.945 "claimed": false, 00:15:09.945 "zoned": false, 00:15:09.945 "supported_io_types": { 00:15:09.945 "read": true, 00:15:09.945 "write": true, 00:15:09.945 "unmap": true, 00:15:09.945 "flush": true, 00:15:09.945 "reset": true, 00:15:09.945 "nvme_admin": false, 00:15:09.945 "nvme_io": false, 00:15:09.945 "nvme_io_md": false, 00:15:09.945 "write_zeroes": true, 00:15:09.945 "zcopy": true, 00:15:09.945 "get_zone_info": false, 
00:15:09.945 "zone_management": false, 00:15:09.945 "zone_append": false, 00:15:09.945 "compare": false, 00:15:09.945 "compare_and_write": false, 00:15:09.945 "abort": true, 00:15:09.945 "seek_hole": false, 00:15:09.945 "seek_data": false, 00:15:09.945 "copy": true, 00:15:09.945 "nvme_iov_md": false 00:15:09.945 }, 00:15:09.945 "memory_domains": [ 00:15:09.945 { 00:15:09.945 "dma_device_id": "system", 00:15:09.945 "dma_device_type": 1 00:15:09.945 }, 00:15:09.945 { 00:15:09.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.945 "dma_device_type": 2 00:15:09.945 } 00:15:09.945 ], 00:15:09.945 "driver_specific": {} 00:15:09.945 } 00:15:09.945 ] 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.945 BaseBdev3 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:09.945 05:53:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.945 [ 00:15:09.945 { 00:15:09.945 "name": "BaseBdev3", 00:15:09.945 "aliases": [ 00:15:09.945 "2da4a40b-7d51-4af3-9e96-e2f06fa4cda9" 00:15:09.945 ], 00:15:09.945 "product_name": "Malloc disk", 00:15:09.945 "block_size": 512, 00:15:09.945 "num_blocks": 65536, 00:15:09.945 "uuid": "2da4a40b-7d51-4af3-9e96-e2f06fa4cda9", 00:15:09.945 "assigned_rate_limits": { 00:15:09.945 "rw_ios_per_sec": 0, 00:15:09.945 "rw_mbytes_per_sec": 0, 00:15:09.945 "r_mbytes_per_sec": 0, 00:15:09.945 "w_mbytes_per_sec": 0 00:15:09.945 }, 00:15:09.945 "claimed": false, 00:15:09.945 "zoned": false, 00:15:09.945 "supported_io_types": { 00:15:09.945 "read": true, 00:15:09.945 "write": true, 00:15:09.945 "unmap": true, 00:15:09.945 "flush": true, 00:15:09.945 "reset": true, 00:15:09.945 "nvme_admin": false, 00:15:09.945 "nvme_io": false, 00:15:09.945 "nvme_io_md": 
false, 00:15:09.945 "write_zeroes": true, 00:15:09.945 "zcopy": true, 00:15:09.945 "get_zone_info": false, 00:15:09.945 "zone_management": false, 00:15:09.945 "zone_append": false, 00:15:09.945 "compare": false, 00:15:09.945 "compare_and_write": false, 00:15:09.945 "abort": true, 00:15:09.945 "seek_hole": false, 00:15:09.945 "seek_data": false, 00:15:09.945 "copy": true, 00:15:09.945 "nvme_iov_md": false 00:15:09.945 }, 00:15:09.945 "memory_domains": [ 00:15:09.945 { 00:15:09.945 "dma_device_id": "system", 00:15:09.945 "dma_device_type": 1 00:15:09.945 }, 00:15:09.945 { 00:15:09.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.945 "dma_device_type": 2 00:15:09.945 } 00:15:09.945 ], 00:15:09.945 "driver_specific": {} 00:15:09.945 } 00:15:09.945 ] 00:15:09.945 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.946 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:09.946 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:09.946 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:09.946 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:09.946 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.946 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:09.946 [2024-12-12 05:53:17.434627] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:09.946 [2024-12-12 05:53:17.434724] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:09.946 [2024-12-12 05:53:17.434762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:15:09.946 [2024-12-12 05:53:17.436529] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:09.946 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.946 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:09.946 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:09.946 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:09.946 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:09.946 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:09.946 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:09.946 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:09.946 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:09.946 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:09.946 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:09.946 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:09.946 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.946 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.946 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.205 05:53:17 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.205 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:10.205 "name": "Existed_Raid", 00:15:10.205 "uuid": "c487bda1-5251-4184-a0f6-38c838d10233", 00:15:10.205 "strip_size_kb": 64, 00:15:10.205 "state": "configuring", 00:15:10.205 "raid_level": "raid5f", 00:15:10.205 "superblock": true, 00:15:10.205 "num_base_bdevs": 3, 00:15:10.205 "num_base_bdevs_discovered": 2, 00:15:10.205 "num_base_bdevs_operational": 3, 00:15:10.205 "base_bdevs_list": [ 00:15:10.206 { 00:15:10.206 "name": "BaseBdev1", 00:15:10.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.206 "is_configured": false, 00:15:10.206 "data_offset": 0, 00:15:10.206 "data_size": 0 00:15:10.206 }, 00:15:10.206 { 00:15:10.206 "name": "BaseBdev2", 00:15:10.206 "uuid": "29928575-b015-46b2-8f65-008cb3a9fec2", 00:15:10.206 "is_configured": true, 00:15:10.206 "data_offset": 2048, 00:15:10.206 "data_size": 63488 00:15:10.206 }, 00:15:10.206 { 00:15:10.206 "name": "BaseBdev3", 00:15:10.206 "uuid": "2da4a40b-7d51-4af3-9e96-e2f06fa4cda9", 00:15:10.206 "is_configured": true, 00:15:10.206 "data_offset": 2048, 00:15:10.206 "data_size": 63488 00:15:10.206 } 00:15:10.206 ] 00:15:10.206 }' 00:15:10.206 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.206 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.465 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:10.465 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.465 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.465 [2024-12-12 05:53:17.906062] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:10.465 
05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.465 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:10.465 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:10.465 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:10.465 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:10.465 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:10.465 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:10.466 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:10.466 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:10.466 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:10.466 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:10.466 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.466 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.466 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:10.466 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:10.466 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.466 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:10.466 "name": "Existed_Raid", 00:15:10.466 "uuid": "c487bda1-5251-4184-a0f6-38c838d10233", 00:15:10.466 "strip_size_kb": 64, 00:15:10.466 "state": "configuring", 00:15:10.466 "raid_level": "raid5f", 00:15:10.466 "superblock": true, 00:15:10.466 "num_base_bdevs": 3, 00:15:10.466 "num_base_bdevs_discovered": 1, 00:15:10.466 "num_base_bdevs_operational": 3, 00:15:10.466 "base_bdevs_list": [ 00:15:10.466 { 00:15:10.466 "name": "BaseBdev1", 00:15:10.466 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.466 "is_configured": false, 00:15:10.466 "data_offset": 0, 00:15:10.466 "data_size": 0 00:15:10.466 }, 00:15:10.466 { 00:15:10.466 "name": null, 00:15:10.466 "uuid": "29928575-b015-46b2-8f65-008cb3a9fec2", 00:15:10.466 "is_configured": false, 00:15:10.466 "data_offset": 0, 00:15:10.466 "data_size": 63488 00:15:10.466 }, 00:15:10.466 { 00:15:10.466 "name": "BaseBdev3", 00:15:10.466 "uuid": "2da4a40b-7d51-4af3-9e96-e2f06fa4cda9", 00:15:10.466 "is_configured": true, 00:15:10.466 "data_offset": 2048, 00:15:10.466 "data_size": 63488 00:15:10.466 } 00:15:10.466 ] 00:15:10.466 }' 00:15:10.466 05:53:17 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:10.466 05:53:17 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.035 [2024-12-12 05:53:18.409136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:11.035 BaseBdev1 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:11.035 
05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.035 [ 00:15:11.035 { 00:15:11.035 "name": "BaseBdev1", 00:15:11.035 "aliases": [ 00:15:11.035 "0b35eb47-88e3-4dda-a7c2-46c75ede77bc" 00:15:11.035 ], 00:15:11.035 "product_name": "Malloc disk", 00:15:11.035 "block_size": 512, 00:15:11.035 "num_blocks": 65536, 00:15:11.035 "uuid": "0b35eb47-88e3-4dda-a7c2-46c75ede77bc", 00:15:11.035 "assigned_rate_limits": { 00:15:11.035 "rw_ios_per_sec": 0, 00:15:11.035 "rw_mbytes_per_sec": 0, 00:15:11.035 "r_mbytes_per_sec": 0, 00:15:11.035 "w_mbytes_per_sec": 0 00:15:11.035 }, 00:15:11.035 "claimed": true, 00:15:11.035 "claim_type": "exclusive_write", 00:15:11.035 "zoned": false, 00:15:11.035 "supported_io_types": { 00:15:11.035 "read": true, 00:15:11.035 "write": true, 00:15:11.035 "unmap": true, 00:15:11.035 "flush": true, 00:15:11.035 "reset": true, 00:15:11.035 "nvme_admin": false, 00:15:11.035 "nvme_io": false, 00:15:11.035 "nvme_io_md": false, 00:15:11.035 "write_zeroes": true, 00:15:11.035 "zcopy": true, 00:15:11.035 "get_zone_info": false, 00:15:11.035 "zone_management": false, 00:15:11.035 "zone_append": false, 00:15:11.035 "compare": false, 00:15:11.035 "compare_and_write": false, 00:15:11.035 "abort": true, 00:15:11.035 "seek_hole": false, 00:15:11.035 "seek_data": false, 00:15:11.035 "copy": true, 00:15:11.035 "nvme_iov_md": false 00:15:11.035 }, 00:15:11.035 "memory_domains": [ 00:15:11.035 { 00:15:11.035 "dma_device_id": "system", 00:15:11.035 "dma_device_type": 1 00:15:11.035 }, 00:15:11.035 { 00:15:11.035 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:11.035 "dma_device_type": 2 00:15:11.035 } 00:15:11.035 ], 00:15:11.035 "driver_specific": {} 00:15:11.035 } 00:15:11.035 ] 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.035 
05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.035 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:11.035 "name": "Existed_Raid", 00:15:11.035 "uuid": "c487bda1-5251-4184-a0f6-38c838d10233", 00:15:11.035 "strip_size_kb": 64, 00:15:11.035 "state": "configuring", 00:15:11.035 "raid_level": "raid5f", 00:15:11.035 "superblock": true, 00:15:11.035 "num_base_bdevs": 3, 00:15:11.035 "num_base_bdevs_discovered": 2, 00:15:11.035 "num_base_bdevs_operational": 3, 00:15:11.035 "base_bdevs_list": [ 00:15:11.035 { 00:15:11.035 "name": "BaseBdev1", 00:15:11.035 "uuid": "0b35eb47-88e3-4dda-a7c2-46c75ede77bc", 00:15:11.035 "is_configured": true, 00:15:11.035 "data_offset": 2048, 00:15:11.035 "data_size": 63488 00:15:11.035 }, 00:15:11.035 { 00:15:11.035 "name": null, 00:15:11.035 "uuid": "29928575-b015-46b2-8f65-008cb3a9fec2", 00:15:11.035 "is_configured": false, 00:15:11.035 "data_offset": 0, 00:15:11.035 "data_size": 63488 00:15:11.035 }, 00:15:11.035 { 00:15:11.035 "name": "BaseBdev3", 00:15:11.035 "uuid": "2da4a40b-7d51-4af3-9e96-e2f06fa4cda9", 00:15:11.035 "is_configured": true, 00:15:11.035 "data_offset": 2048, 00:15:11.036 "data_size": 63488 00:15:11.036 } 00:15:11.036 ] 00:15:11.036 }' 00:15:11.036 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.036 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.604 [2024-12-12 05:53:18.952255] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.604 05:53:18 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.604 "name": "Existed_Raid", 00:15:11.604 "uuid": "c487bda1-5251-4184-a0f6-38c838d10233", 00:15:11.604 "strip_size_kb": 64, 00:15:11.604 "state": "configuring", 00:15:11.604 "raid_level": "raid5f", 00:15:11.604 "superblock": true, 00:15:11.604 "num_base_bdevs": 3, 00:15:11.604 "num_base_bdevs_discovered": 1, 00:15:11.604 "num_base_bdevs_operational": 3, 00:15:11.604 "base_bdevs_list": [ 00:15:11.604 { 00:15:11.604 "name": "BaseBdev1", 00:15:11.604 "uuid": "0b35eb47-88e3-4dda-a7c2-46c75ede77bc", 00:15:11.604 "is_configured": true, 00:15:11.604 "data_offset": 2048, 00:15:11.604 "data_size": 63488 00:15:11.604 }, 00:15:11.604 { 00:15:11.604 "name": null, 00:15:11.604 "uuid": "29928575-b015-46b2-8f65-008cb3a9fec2", 00:15:11.604 "is_configured": false, 00:15:11.604 "data_offset": 0, 00:15:11.604 "data_size": 63488 00:15:11.604 }, 00:15:11.604 { 00:15:11.604 "name": null, 00:15:11.604 "uuid": "2da4a40b-7d51-4af3-9e96-e2f06fa4cda9", 00:15:11.604 "is_configured": false, 00:15:11.604 "data_offset": 0, 00:15:11.604 "data_size": 63488 00:15:11.604 } 00:15:11.604 ] 00:15:11.604 }' 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.604 05:53:18 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.864 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 
00:15:11.864 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:11.864 05:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.864 05:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:11.864 05:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.124 [2024-12-12 05:53:19.419479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.124 05:53:19 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.124 "name": "Existed_Raid", 00:15:12.124 "uuid": "c487bda1-5251-4184-a0f6-38c838d10233", 00:15:12.124 "strip_size_kb": 64, 00:15:12.124 "state": "configuring", 00:15:12.124 "raid_level": "raid5f", 00:15:12.124 "superblock": true, 00:15:12.124 "num_base_bdevs": 3, 00:15:12.124 "num_base_bdevs_discovered": 2, 00:15:12.124 "num_base_bdevs_operational": 3, 00:15:12.124 "base_bdevs_list": [ 00:15:12.124 { 00:15:12.124 "name": "BaseBdev1", 00:15:12.124 "uuid": "0b35eb47-88e3-4dda-a7c2-46c75ede77bc", 00:15:12.124 "is_configured": true, 00:15:12.124 "data_offset": 2048, 00:15:12.124 "data_size": 63488 00:15:12.124 }, 00:15:12.124 { 00:15:12.124 "name": null, 00:15:12.124 "uuid": "29928575-b015-46b2-8f65-008cb3a9fec2", 00:15:12.124 "is_configured": false, 00:15:12.124 "data_offset": 0, 00:15:12.124 "data_size": 63488 00:15:12.124 }, 00:15:12.124 { 
00:15:12.124 "name": "BaseBdev3", 00:15:12.124 "uuid": "2da4a40b-7d51-4af3-9e96-e2f06fa4cda9", 00:15:12.124 "is_configured": true, 00:15:12.124 "data_offset": 2048, 00:15:12.124 "data_size": 63488 00:15:12.124 } 00:15:12.124 ] 00:15:12.124 }' 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.124 05:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.384 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.384 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:12.384 05:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.384 05:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.384 05:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.384 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:12.384 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:12.384 05:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.384 05:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.384 [2024-12-12 05:53:19.842739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:12.644 05:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.644 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:12.644 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:12.644 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.644 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.644 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.644 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.644 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.644 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.644 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.644 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.644 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.644 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.644 05:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.644 05:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.644 05:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.644 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.644 "name": "Existed_Raid", 00:15:12.644 "uuid": "c487bda1-5251-4184-a0f6-38c838d10233", 00:15:12.644 "strip_size_kb": 64, 00:15:12.644 "state": "configuring", 00:15:12.644 "raid_level": "raid5f", 00:15:12.644 "superblock": true, 00:15:12.644 "num_base_bdevs": 3, 00:15:12.644 "num_base_bdevs_discovered": 1, 00:15:12.644 
"num_base_bdevs_operational": 3, 00:15:12.644 "base_bdevs_list": [ 00:15:12.644 { 00:15:12.644 "name": null, 00:15:12.644 "uuid": "0b35eb47-88e3-4dda-a7c2-46c75ede77bc", 00:15:12.644 "is_configured": false, 00:15:12.644 "data_offset": 0, 00:15:12.644 "data_size": 63488 00:15:12.644 }, 00:15:12.644 { 00:15:12.644 "name": null, 00:15:12.644 "uuid": "29928575-b015-46b2-8f65-008cb3a9fec2", 00:15:12.644 "is_configured": false, 00:15:12.644 "data_offset": 0, 00:15:12.644 "data_size": 63488 00:15:12.644 }, 00:15:12.644 { 00:15:12.644 "name": "BaseBdev3", 00:15:12.644 "uuid": "2da4a40b-7d51-4af3-9e96-e2f06fa4cda9", 00:15:12.644 "is_configured": true, 00:15:12.644 "data_offset": 2048, 00:15:12.644 "data_size": 63488 00:15:12.644 } 00:15:12.644 ] 00:15:12.644 }' 00:15:12.644 05:53:19 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.644 05:53:19 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.904 05:53:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:12.904 [2024-12-12 05:53:20.393126] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.904 05:53:20 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:13.163 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.163 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.163 "name": "Existed_Raid", 00:15:13.163 "uuid": "c487bda1-5251-4184-a0f6-38c838d10233", 00:15:13.163 "strip_size_kb": 64, 00:15:13.163 "state": "configuring", 00:15:13.163 "raid_level": "raid5f", 00:15:13.163 "superblock": true, 00:15:13.163 "num_base_bdevs": 3, 00:15:13.163 "num_base_bdevs_discovered": 2, 00:15:13.163 "num_base_bdevs_operational": 3, 00:15:13.163 "base_bdevs_list": [ 00:15:13.163 { 00:15:13.163 "name": null, 00:15:13.163 "uuid": "0b35eb47-88e3-4dda-a7c2-46c75ede77bc", 00:15:13.163 "is_configured": false, 00:15:13.164 "data_offset": 0, 00:15:13.164 "data_size": 63488 00:15:13.164 }, 00:15:13.164 { 00:15:13.164 "name": "BaseBdev2", 00:15:13.164 "uuid": "29928575-b015-46b2-8f65-008cb3a9fec2", 00:15:13.164 "is_configured": true, 00:15:13.164 "data_offset": 2048, 00:15:13.164 "data_size": 63488 00:15:13.164 }, 00:15:13.164 { 00:15:13.164 "name": "BaseBdev3", 00:15:13.164 "uuid": "2da4a40b-7d51-4af3-9e96-e2f06fa4cda9", 00:15:13.164 "is_configured": true, 00:15:13.164 "data_offset": 2048, 00:15:13.164 "data_size": 63488 00:15:13.164 } 00:15:13.164 ] 00:15:13.164 }' 00:15:13.164 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.164 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.423 05:53:20 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 0b35eb47-88e3-4dda-a7c2-46c75ede77bc 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.423 [2024-12-12 05:53:20.874884] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:13.423 [2024-12-12 05:53:20.875073] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:13.423 [2024-12-12 05:53:20.875089] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:13.423 [2024-12-12 05:53:20.875309] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:13.423 NewBaseBdev 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.423 05:53:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.423 [2024-12-12 05:53:20.880618] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:13.423 [2024-12-12 05:53:20.880641] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:13.423 [2024-12-12 05:53:20.880806] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.423 [ 00:15:13.423 { 00:15:13.423 "name": "NewBaseBdev", 00:15:13.423 
"aliases": [ 00:15:13.423 "0b35eb47-88e3-4dda-a7c2-46c75ede77bc" 00:15:13.423 ], 00:15:13.423 "product_name": "Malloc disk", 00:15:13.423 "block_size": 512, 00:15:13.423 "num_blocks": 65536, 00:15:13.423 "uuid": "0b35eb47-88e3-4dda-a7c2-46c75ede77bc", 00:15:13.423 "assigned_rate_limits": { 00:15:13.423 "rw_ios_per_sec": 0, 00:15:13.423 "rw_mbytes_per_sec": 0, 00:15:13.423 "r_mbytes_per_sec": 0, 00:15:13.423 "w_mbytes_per_sec": 0 00:15:13.423 }, 00:15:13.423 "claimed": true, 00:15:13.423 "claim_type": "exclusive_write", 00:15:13.423 "zoned": false, 00:15:13.423 "supported_io_types": { 00:15:13.423 "read": true, 00:15:13.423 "write": true, 00:15:13.423 "unmap": true, 00:15:13.423 "flush": true, 00:15:13.423 "reset": true, 00:15:13.423 "nvme_admin": false, 00:15:13.423 "nvme_io": false, 00:15:13.423 "nvme_io_md": false, 00:15:13.423 "write_zeroes": true, 00:15:13.423 "zcopy": true, 00:15:13.423 "get_zone_info": false, 00:15:13.423 "zone_management": false, 00:15:13.423 "zone_append": false, 00:15:13.423 "compare": false, 00:15:13.423 "compare_and_write": false, 00:15:13.423 "abort": true, 00:15:13.423 "seek_hole": false, 00:15:13.423 "seek_data": false, 00:15:13.423 "copy": true, 00:15:13.423 "nvme_iov_md": false 00:15:13.423 }, 00:15:13.423 "memory_domains": [ 00:15:13.423 { 00:15:13.423 "dma_device_id": "system", 00:15:13.423 "dma_device_type": 1 00:15:13.423 }, 00:15:13.423 { 00:15:13.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.423 "dma_device_type": 2 00:15:13.423 } 00:15:13.423 ], 00:15:13.423 "driver_specific": {} 00:15:13.423 } 00:15:13.423 ] 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:13.423 05:53:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.423 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.683 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:13.683 "name": "Existed_Raid", 00:15:13.683 "uuid": "c487bda1-5251-4184-a0f6-38c838d10233", 00:15:13.683 "strip_size_kb": 64, 00:15:13.683 "state": "online", 00:15:13.683 "raid_level": "raid5f", 00:15:13.683 "superblock": true, 00:15:13.683 
"num_base_bdevs": 3, 00:15:13.683 "num_base_bdevs_discovered": 3, 00:15:13.683 "num_base_bdevs_operational": 3, 00:15:13.683 "base_bdevs_list": [ 00:15:13.683 { 00:15:13.683 "name": "NewBaseBdev", 00:15:13.683 "uuid": "0b35eb47-88e3-4dda-a7c2-46c75ede77bc", 00:15:13.683 "is_configured": true, 00:15:13.683 "data_offset": 2048, 00:15:13.683 "data_size": 63488 00:15:13.683 }, 00:15:13.683 { 00:15:13.683 "name": "BaseBdev2", 00:15:13.683 "uuid": "29928575-b015-46b2-8f65-008cb3a9fec2", 00:15:13.683 "is_configured": true, 00:15:13.683 "data_offset": 2048, 00:15:13.683 "data_size": 63488 00:15:13.683 }, 00:15:13.683 { 00:15:13.683 "name": "BaseBdev3", 00:15:13.683 "uuid": "2da4a40b-7d51-4af3-9e96-e2f06fa4cda9", 00:15:13.683 "is_configured": true, 00:15:13.683 "data_offset": 2048, 00:15:13.683 "data_size": 63488 00:15:13.683 } 00:15:13.683 ] 00:15:13.683 }' 00:15:13.683 05:53:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:13.683 05:53:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.944 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:13.944 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:13.944 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:13.944 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:13.944 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:13.944 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:13.944 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:13.944 05:53:21 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:13.944 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.944 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:13.944 [2024-12-12 05:53:21.342206] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.944 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.944 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:13.944 "name": "Existed_Raid", 00:15:13.944 "aliases": [ 00:15:13.944 "c487bda1-5251-4184-a0f6-38c838d10233" 00:15:13.944 ], 00:15:13.944 "product_name": "Raid Volume", 00:15:13.944 "block_size": 512, 00:15:13.944 "num_blocks": 126976, 00:15:13.944 "uuid": "c487bda1-5251-4184-a0f6-38c838d10233", 00:15:13.944 "assigned_rate_limits": { 00:15:13.944 "rw_ios_per_sec": 0, 00:15:13.944 "rw_mbytes_per_sec": 0, 00:15:13.944 "r_mbytes_per_sec": 0, 00:15:13.944 "w_mbytes_per_sec": 0 00:15:13.944 }, 00:15:13.944 "claimed": false, 00:15:13.944 "zoned": false, 00:15:13.944 "supported_io_types": { 00:15:13.944 "read": true, 00:15:13.944 "write": true, 00:15:13.944 "unmap": false, 00:15:13.944 "flush": false, 00:15:13.944 "reset": true, 00:15:13.944 "nvme_admin": false, 00:15:13.944 "nvme_io": false, 00:15:13.944 "nvme_io_md": false, 00:15:13.944 "write_zeroes": true, 00:15:13.944 "zcopy": false, 00:15:13.944 "get_zone_info": false, 00:15:13.944 "zone_management": false, 00:15:13.944 "zone_append": false, 00:15:13.944 "compare": false, 00:15:13.944 "compare_and_write": false, 00:15:13.944 "abort": false, 00:15:13.944 "seek_hole": false, 00:15:13.944 "seek_data": false, 00:15:13.944 "copy": false, 00:15:13.944 "nvme_iov_md": false 00:15:13.944 }, 00:15:13.944 "driver_specific": { 00:15:13.944 "raid": { 00:15:13.944 "uuid": "c487bda1-5251-4184-a0f6-38c838d10233", 00:15:13.944 
"strip_size_kb": 64, 00:15:13.944 "state": "online", 00:15:13.944 "raid_level": "raid5f", 00:15:13.944 "superblock": true, 00:15:13.944 "num_base_bdevs": 3, 00:15:13.944 "num_base_bdevs_discovered": 3, 00:15:13.944 "num_base_bdevs_operational": 3, 00:15:13.944 "base_bdevs_list": [ 00:15:13.944 { 00:15:13.944 "name": "NewBaseBdev", 00:15:13.944 "uuid": "0b35eb47-88e3-4dda-a7c2-46c75ede77bc", 00:15:13.944 "is_configured": true, 00:15:13.944 "data_offset": 2048, 00:15:13.944 "data_size": 63488 00:15:13.944 }, 00:15:13.944 { 00:15:13.944 "name": "BaseBdev2", 00:15:13.944 "uuid": "29928575-b015-46b2-8f65-008cb3a9fec2", 00:15:13.944 "is_configured": true, 00:15:13.944 "data_offset": 2048, 00:15:13.944 "data_size": 63488 00:15:13.944 }, 00:15:13.944 { 00:15:13.944 "name": "BaseBdev3", 00:15:13.944 "uuid": "2da4a40b-7d51-4af3-9e96-e2f06fa4cda9", 00:15:13.944 "is_configured": true, 00:15:13.944 "data_offset": 2048, 00:15:13.944 "data_size": 63488 00:15:13.944 } 00:15:13.944 ] 00:15:13.944 } 00:15:13.944 } 00:15:13.944 }' 00:15:13.944 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:13.944 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:13.944 BaseBdev2 00:15:13.944 BaseBdev3' 00:15:13.944 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:13.944 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:13.944 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:14.204 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.204 05:53:21 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:14.204 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.204 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.204 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:14.205 [2024-12-12 05:53:21.621573] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:14.205 [2024-12-12 05:53:21.621597] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:14.205 [2024-12-12 05:53:21.621667] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:14.205 [2024-12-12 05:53:21.621932] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:14.205 [2024-12-12 05:53:21.621944] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80847 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 
80847 ']' 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80847 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80847 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80847' 00:15:14.205 killing process with pid 80847 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80847 00:15:14.205 [2024-12-12 05:53:21.667854] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:14.205 05:53:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80847 00:15:14.464 [2024-12-12 05:53:21.945087] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:15.845 05:53:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:15:15.846 00:15:15.846 real 0m10.340s 00:15:15.846 user 0m16.562s 00:15:15.846 sys 0m1.790s 00:15:15.846 05:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.846 05:53:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:15.846 ************************************ 00:15:15.846 END TEST raid5f_state_function_test_sb 00:15:15.846 ************************************ 00:15:15.846 05:53:23 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test 
raid5f_superblock_test raid_superblock_test raid5f 3 00:15:15.846 05:53:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:15:15.846 05:53:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:15.846 05:53:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:15.846 ************************************ 00:15:15.846 START TEST raid5f_superblock_test 00:15:15.846 ************************************ 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:15:15.846 05:53:23 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81403 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81403 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81403 ']' 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:15.846 05:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.846 [2024-12-12 05:53:23.148850] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:15:15.846 [2024-12-12 05:53:23.148976] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81403 ] 00:15:15.846 [2024-12-12 05:53:23.318747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.105 [2024-12-12 05:53:23.426874] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.365 [2024-12-12 05:53:23.627119] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.365 [2024-12-12 05:53:23.627251] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.628 05:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.628 05:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:15:16.628 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:15:16.628 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:16.628 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:15:16.628 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:15:16.628 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:16.628 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:16.628 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:16.628 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:16.628 05:53:23 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:15:16.628 05:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.628 05:53:23 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.628 malloc1 00:15:16.628 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.628 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:16.628 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.628 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.628 [2024-12-12 05:53:24.023018] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:16.628 [2024-12-12 05:53:24.023143] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.628 [2024-12-12 05:53:24.023184] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:16.628 [2024-12-12 05:53:24.023212] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.628 [2024-12-12 05:53:24.025353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.628 [2024-12-12 05:53:24.025422] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:16.628 pt1 00:15:16.628 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.628 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:16.628 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:16.628 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:15:16.628 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:15:16.628 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:16.628 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:16.628 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:16.628 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:16.628 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:15:16.628 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.628 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.628 malloc2 00:15:16.629 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.629 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:16.629 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.629 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.629 [2024-12-12 05:53:24.079753] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:16.629 [2024-12-12 05:53:24.079856] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.629 [2024-12-12 05:53:24.079892] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:16.629 [2024-12-12 05:53:24.079917] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.629 [2024-12-12 05:53:24.081940] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.629 [2024-12-12 05:53:24.082016] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:16.629 pt2 00:15:16.629 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.629 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:16.629 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:16.629 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:15:16.629 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:15:16.629 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:16.629 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:16.629 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:15:16.629 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:16.629 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:15:16.629 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.629 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.629 malloc3 00:15:16.629 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.629 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:16.629 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.629 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.901 [2024-12-12 05:53:24.147518] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:16.901 [2024-12-12 05:53:24.147612] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.901 [2024-12-12 05:53:24.147649] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:16.901 [2024-12-12 05:53:24.147696] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.901 [2024-12-12 05:53:24.149729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.901 [2024-12-12 05:53:24.149801] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:16.901 pt3 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.901 [2024-12-12 05:53:24.159548] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:16.901 [2024-12-12 05:53:24.161198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:16.901 [2024-12-12 05:53:24.161253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:16.901 [2024-12-12 05:53:24.161400] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:16.901 [2024-12-12 05:53:24.161419] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:15:16.901 [2024-12-12 05:53:24.161637] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:16.901 [2024-12-12 05:53:24.166918] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:16.901 [2024-12-12 05:53:24.166989] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:16.901 [2024-12-12 05:53:24.167199] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.901 "name": "raid_bdev1", 00:15:16.901 "uuid": "bc09c853-8b9e-4b8b-8502-948e52e4a3d8", 00:15:16.901 "strip_size_kb": 64, 00:15:16.901 "state": "online", 00:15:16.901 "raid_level": "raid5f", 00:15:16.901 "superblock": true, 00:15:16.901 "num_base_bdevs": 3, 00:15:16.901 "num_base_bdevs_discovered": 3, 00:15:16.901 "num_base_bdevs_operational": 3, 00:15:16.901 "base_bdevs_list": [ 00:15:16.901 { 00:15:16.901 "name": "pt1", 00:15:16.901 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:16.901 "is_configured": true, 00:15:16.901 "data_offset": 2048, 00:15:16.901 "data_size": 63488 00:15:16.901 }, 00:15:16.901 { 00:15:16.901 "name": "pt2", 00:15:16.901 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:16.901 "is_configured": true, 00:15:16.901 "data_offset": 2048, 00:15:16.901 "data_size": 63488 00:15:16.901 }, 00:15:16.901 { 00:15:16.901 "name": "pt3", 00:15:16.901 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:16.901 "is_configured": true, 00:15:16.901 "data_offset": 2048, 00:15:16.901 "data_size": 63488 00:15:16.901 } 00:15:16.901 ] 00:15:16.901 }' 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.901 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.175 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:15:17.175 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:17.175 05:53:24 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:17.175 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:17.175 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:17.175 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:17.175 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:17.175 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:17.175 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.175 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.175 [2024-12-12 05:53:24.584717] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.175 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.175 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:17.175 "name": "raid_bdev1", 00:15:17.175 "aliases": [ 00:15:17.175 "bc09c853-8b9e-4b8b-8502-948e52e4a3d8" 00:15:17.175 ], 00:15:17.175 "product_name": "Raid Volume", 00:15:17.175 "block_size": 512, 00:15:17.175 "num_blocks": 126976, 00:15:17.175 "uuid": "bc09c853-8b9e-4b8b-8502-948e52e4a3d8", 00:15:17.175 "assigned_rate_limits": { 00:15:17.175 "rw_ios_per_sec": 0, 00:15:17.175 "rw_mbytes_per_sec": 0, 00:15:17.175 "r_mbytes_per_sec": 0, 00:15:17.175 "w_mbytes_per_sec": 0 00:15:17.175 }, 00:15:17.175 "claimed": false, 00:15:17.175 "zoned": false, 00:15:17.175 "supported_io_types": { 00:15:17.175 "read": true, 00:15:17.175 "write": true, 00:15:17.175 "unmap": false, 00:15:17.175 "flush": false, 00:15:17.175 "reset": true, 00:15:17.175 "nvme_admin": false, 00:15:17.175 "nvme_io": false, 00:15:17.175 "nvme_io_md": false, 
00:15:17.175 "write_zeroes": true, 00:15:17.175 "zcopy": false, 00:15:17.175 "get_zone_info": false, 00:15:17.175 "zone_management": false, 00:15:17.175 "zone_append": false, 00:15:17.175 "compare": false, 00:15:17.175 "compare_and_write": false, 00:15:17.175 "abort": false, 00:15:17.175 "seek_hole": false, 00:15:17.175 "seek_data": false, 00:15:17.175 "copy": false, 00:15:17.175 "nvme_iov_md": false 00:15:17.175 }, 00:15:17.175 "driver_specific": { 00:15:17.175 "raid": { 00:15:17.175 "uuid": "bc09c853-8b9e-4b8b-8502-948e52e4a3d8", 00:15:17.175 "strip_size_kb": 64, 00:15:17.175 "state": "online", 00:15:17.175 "raid_level": "raid5f", 00:15:17.175 "superblock": true, 00:15:17.175 "num_base_bdevs": 3, 00:15:17.175 "num_base_bdevs_discovered": 3, 00:15:17.175 "num_base_bdevs_operational": 3, 00:15:17.175 "base_bdevs_list": [ 00:15:17.175 { 00:15:17.175 "name": "pt1", 00:15:17.175 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:17.175 "is_configured": true, 00:15:17.175 "data_offset": 2048, 00:15:17.175 "data_size": 63488 00:15:17.175 }, 00:15:17.175 { 00:15:17.175 "name": "pt2", 00:15:17.175 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:17.175 "is_configured": true, 00:15:17.175 "data_offset": 2048, 00:15:17.175 "data_size": 63488 00:15:17.175 }, 00:15:17.175 { 00:15:17.175 "name": "pt3", 00:15:17.175 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:17.175 "is_configured": true, 00:15:17.175 "data_offset": 2048, 00:15:17.175 "data_size": 63488 00:15:17.175 } 00:15:17.175 ] 00:15:17.175 } 00:15:17.175 } 00:15:17.175 }' 00:15:17.175 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:17.175 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:17.175 pt2 00:15:17.175 pt3' 00:15:17.175 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:15:17.175 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:17.175 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.175 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:17.175 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.175 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.435 
05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:15:17.435 [2024-12-12 05:53:24.828242] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bc09c853-8b9e-4b8b-8502-948e52e4a3d8 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bc09c853-8b9e-4b8b-8502-948e52e4a3d8 ']' 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:17.435 05:53:24 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.435 [2024-12-12 05:53:24.860028] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:17.435 [2024-12-12 05:53:24.860052] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:17.435 [2024-12-12 05:53:24.860112] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.435 [2024-12-12 05:53:24.860176] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.435 [2024-12-12 05:53:24.860185] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.435 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.436 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:15:17.436 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.436 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.436 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:17.696 05:53:24 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.696 05:53:24 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.696 [2024-12-12 05:53:25.011832] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:17.696 [2024-12-12 05:53:25.013597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:17.696 [2024-12-12 05:53:25.013642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:17.696 [2024-12-12 05:53:25.013691] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:17.696 [2024-12-12 05:53:25.013736] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:17.696 [2024-12-12 05:53:25.013754] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:17.696 [2024-12-12 05:53:25.013768] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:17.696 [2024-12-12 05:53:25.013777] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:15:17.696 request: 00:15:17.696 { 00:15:17.696 "name": "raid_bdev1", 00:15:17.696 "raid_level": "raid5f", 00:15:17.696 "base_bdevs": [ 00:15:17.696 "malloc1", 00:15:17.696 "malloc2", 00:15:17.696 "malloc3" 00:15:17.696 ], 00:15:17.696 "strip_size_kb": 64, 00:15:17.696 "superblock": false, 00:15:17.696 "method": "bdev_raid_create", 00:15:17.696 "req_id": 1 00:15:17.696 } 00:15:17.696 Got JSON-RPC error response 00:15:17.696 response: 00:15:17.696 { 00:15:17.696 "code": -17, 00:15:17.696 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:17.696 } 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.696 [2024-12-12 05:53:25.079664] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:17.696 [2024-12-12 05:53:25.079751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.696 [2024-12-12 05:53:25.079784] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:17.696 [2024-12-12 05:53:25.079810] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.696 [2024-12-12 05:53:25.081875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.696 [2024-12-12 05:53:25.081938] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:17.696 [2024-12-12 05:53:25.082042] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:17.696 [2024-12-12 05:53:25.082104] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:17.696 pt1 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.696 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.696 "name": "raid_bdev1", 00:15:17.696 "uuid": "bc09c853-8b9e-4b8b-8502-948e52e4a3d8", 00:15:17.696 "strip_size_kb": 64, 00:15:17.696 "state": "configuring", 00:15:17.696 "raid_level": "raid5f", 00:15:17.696 "superblock": true, 00:15:17.696 "num_base_bdevs": 3, 00:15:17.696 "num_base_bdevs_discovered": 1, 00:15:17.696 
"num_base_bdevs_operational": 3, 00:15:17.696 "base_bdevs_list": [ 00:15:17.696 { 00:15:17.696 "name": "pt1", 00:15:17.696 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:17.696 "is_configured": true, 00:15:17.696 "data_offset": 2048, 00:15:17.696 "data_size": 63488 00:15:17.696 }, 00:15:17.696 { 00:15:17.696 "name": null, 00:15:17.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:17.696 "is_configured": false, 00:15:17.696 "data_offset": 2048, 00:15:17.696 "data_size": 63488 00:15:17.696 }, 00:15:17.696 { 00:15:17.696 "name": null, 00:15:17.696 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:17.696 "is_configured": false, 00:15:17.696 "data_offset": 2048, 00:15:17.696 "data_size": 63488 00:15:17.696 } 00:15:17.696 ] 00:15:17.697 }' 00:15:17.697 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.697 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.266 [2024-12-12 05:53:25.494950] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:18.266 [2024-12-12 05:53:25.495000] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.266 [2024-12-12 05:53:25.495019] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:18.266 [2024-12-12 05:53:25.495027] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.266 [2024-12-12 05:53:25.495400] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.266 [2024-12-12 05:53:25.495422] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:18.266 [2024-12-12 05:53:25.495492] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:18.266 [2024-12-12 05:53:25.495538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:18.266 pt2 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.266 [2024-12-12 05:53:25.506943] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.266 "name": "raid_bdev1", 00:15:18.266 "uuid": "bc09c853-8b9e-4b8b-8502-948e52e4a3d8", 00:15:18.266 "strip_size_kb": 64, 00:15:18.266 "state": "configuring", 00:15:18.266 "raid_level": "raid5f", 00:15:18.266 "superblock": true, 00:15:18.266 "num_base_bdevs": 3, 00:15:18.266 "num_base_bdevs_discovered": 1, 00:15:18.266 "num_base_bdevs_operational": 3, 00:15:18.266 "base_bdevs_list": [ 00:15:18.266 { 00:15:18.266 "name": "pt1", 00:15:18.266 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:18.266 "is_configured": true, 00:15:18.266 "data_offset": 2048, 00:15:18.266 "data_size": 63488 00:15:18.266 }, 00:15:18.266 { 00:15:18.266 "name": null, 00:15:18.266 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:18.266 "is_configured": false, 00:15:18.266 "data_offset": 0, 00:15:18.266 "data_size": 63488 00:15:18.266 }, 00:15:18.266 { 00:15:18.266 "name": null, 00:15:18.266 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:18.266 "is_configured": false, 00:15:18.266 "data_offset": 2048, 00:15:18.266 "data_size": 63488 00:15:18.266 } 00:15:18.266 ] 00:15:18.266 }' 00:15:18.266 05:53:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.266 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.527 [2024-12-12 05:53:25.934205] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:18.527 [2024-12-12 05:53:25.934301] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.527 [2024-12-12 05:53:25.934333] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:15:18.527 [2024-12-12 05:53:25.934360] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.527 [2024-12-12 05:53:25.934890] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.527 [2024-12-12 05:53:25.934950] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:18.527 [2024-12-12 05:53:25.935051] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:18.527 [2024-12-12 05:53:25.935101] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:18.527 pt2 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:18.527 05:53:25 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:18.527 [2024-12-12 05:53:25.946185] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:18.527 [2024-12-12 05:53:25.946280] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.527 [2024-12-12 05:53:25.946308] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:15:18.527 [2024-12-12 05:53:25.946334] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.527 [2024-12-12 05:53:25.946731] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.527 [2024-12-12 05:53:25.946793] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:18.527 [2024-12-12 05:53:25.946882] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:18.527 [2024-12-12 05:53:25.946929] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:18.527 [2024-12-12 05:53:25.947065] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:18.527 [2024-12-12 05:53:25.947110] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:18.527 [2024-12-12 05:53:25.947371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:18.527 [2024-12-12 05:53:25.952529] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:18.527 [2024-12-12 05:53:25.952578] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:15:18.527 [2024-12-12 05:53:25.952805] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:18.527 pt3 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:18.527 05:53:25 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.527 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:18.527 "name": "raid_bdev1", 00:15:18.527 "uuid": "bc09c853-8b9e-4b8b-8502-948e52e4a3d8", 00:15:18.527 "strip_size_kb": 64, 00:15:18.527 "state": "online", 00:15:18.527 "raid_level": "raid5f", 00:15:18.527 "superblock": true, 00:15:18.527 "num_base_bdevs": 3, 00:15:18.527 "num_base_bdevs_discovered": 3, 00:15:18.527 "num_base_bdevs_operational": 3, 00:15:18.527 "base_bdevs_list": [ 00:15:18.527 { 00:15:18.527 "name": "pt1", 00:15:18.527 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:18.527 "is_configured": true, 00:15:18.527 "data_offset": 2048, 00:15:18.527 "data_size": 63488 00:15:18.527 }, 00:15:18.527 { 00:15:18.527 "name": "pt2", 00:15:18.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:18.527 "is_configured": true, 00:15:18.527 "data_offset": 2048, 00:15:18.527 "data_size": 63488 00:15:18.527 }, 00:15:18.527 { 00:15:18.527 "name": "pt3", 00:15:18.527 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:18.527 "is_configured": true, 00:15:18.527 "data_offset": 2048, 00:15:18.527 "data_size": 63488 00:15:18.527 } 00:15:18.527 ] 00:15:18.527 }' 00:15:18.527 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:18.527 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:19.097 
05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.097 [2024-12-12 05:53:26.386726] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:19.097 "name": "raid_bdev1", 00:15:19.097 "aliases": [ 00:15:19.097 "bc09c853-8b9e-4b8b-8502-948e52e4a3d8" 00:15:19.097 ], 00:15:19.097 "product_name": "Raid Volume", 00:15:19.097 "block_size": 512, 00:15:19.097 "num_blocks": 126976, 00:15:19.097 "uuid": "bc09c853-8b9e-4b8b-8502-948e52e4a3d8", 00:15:19.097 "assigned_rate_limits": { 00:15:19.097 "rw_ios_per_sec": 0, 00:15:19.097 "rw_mbytes_per_sec": 0, 00:15:19.097 "r_mbytes_per_sec": 0, 00:15:19.097 "w_mbytes_per_sec": 0 00:15:19.097 }, 00:15:19.097 "claimed": false, 00:15:19.097 "zoned": false, 00:15:19.097 "supported_io_types": { 00:15:19.097 "read": true, 00:15:19.097 "write": true, 00:15:19.097 "unmap": false, 00:15:19.097 "flush": false, 00:15:19.097 "reset": true, 00:15:19.097 "nvme_admin": false, 00:15:19.097 "nvme_io": false, 00:15:19.097 "nvme_io_md": false, 00:15:19.097 "write_zeroes": true, 00:15:19.097 "zcopy": false, 00:15:19.097 "get_zone_info": false, 
00:15:19.097 "zone_management": false, 00:15:19.097 "zone_append": false, 00:15:19.097 "compare": false, 00:15:19.097 "compare_and_write": false, 00:15:19.097 "abort": false, 00:15:19.097 "seek_hole": false, 00:15:19.097 "seek_data": false, 00:15:19.097 "copy": false, 00:15:19.097 "nvme_iov_md": false 00:15:19.097 }, 00:15:19.097 "driver_specific": { 00:15:19.097 "raid": { 00:15:19.097 "uuid": "bc09c853-8b9e-4b8b-8502-948e52e4a3d8", 00:15:19.097 "strip_size_kb": 64, 00:15:19.097 "state": "online", 00:15:19.097 "raid_level": "raid5f", 00:15:19.097 "superblock": true, 00:15:19.097 "num_base_bdevs": 3, 00:15:19.097 "num_base_bdevs_discovered": 3, 00:15:19.097 "num_base_bdevs_operational": 3, 00:15:19.097 "base_bdevs_list": [ 00:15:19.097 { 00:15:19.097 "name": "pt1", 00:15:19.097 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:19.097 "is_configured": true, 00:15:19.097 "data_offset": 2048, 00:15:19.097 "data_size": 63488 00:15:19.097 }, 00:15:19.097 { 00:15:19.097 "name": "pt2", 00:15:19.097 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:19.097 "is_configured": true, 00:15:19.097 "data_offset": 2048, 00:15:19.097 "data_size": 63488 00:15:19.097 }, 00:15:19.097 { 00:15:19.097 "name": "pt3", 00:15:19.097 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:19.097 "is_configured": true, 00:15:19.097 "data_offset": 2048, 00:15:19.097 "data_size": 63488 00:15:19.097 } 00:15:19.097 ] 00:15:19.097 } 00:15:19.097 } 00:15:19.097 }' 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:15:19.097 pt2 00:15:19.097 pt3' 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.097 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.359 [2024-12-12 05:53:26.618258] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bc09c853-8b9e-4b8b-8502-948e52e4a3d8 '!=' bc09c853-8b9e-4b8b-8502-948e52e4a3d8 ']' 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:19.359 05:53:26 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.359 [2024-12-12 05:53:26.662058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.359 "name": "raid_bdev1", 00:15:19.359 "uuid": "bc09c853-8b9e-4b8b-8502-948e52e4a3d8", 00:15:19.359 "strip_size_kb": 64, 00:15:19.359 "state": "online", 00:15:19.359 "raid_level": "raid5f", 00:15:19.359 "superblock": true, 00:15:19.359 "num_base_bdevs": 3, 00:15:19.359 "num_base_bdevs_discovered": 2, 00:15:19.359 "num_base_bdevs_operational": 2, 00:15:19.359 "base_bdevs_list": [ 00:15:19.359 { 00:15:19.359 "name": null, 00:15:19.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.359 "is_configured": false, 00:15:19.359 "data_offset": 0, 00:15:19.359 "data_size": 63488 00:15:19.359 }, 00:15:19.359 { 00:15:19.359 "name": "pt2", 00:15:19.359 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:19.359 "is_configured": true, 00:15:19.359 "data_offset": 2048, 00:15:19.359 "data_size": 63488 00:15:19.359 }, 00:15:19.359 { 00:15:19.359 "name": "pt3", 00:15:19.359 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:19.359 "is_configured": true, 00:15:19.359 "data_offset": 2048, 00:15:19.359 "data_size": 63488 00:15:19.359 } 00:15:19.359 ] 00:15:19.359 }' 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.359 05:53:26 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.619 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:19.619 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.619 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.619 [2024-12-12 05:53:27.049357] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:15:19.619 [2024-12-12 05:53:27.049424] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:19.619 [2024-12-12 05:53:27.049498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:19.619 [2024-12-12 05:53:27.049593] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:19.619 [2024-12-12 05:53:27.049654] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:15:19.619 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.619 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.619 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.619 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:15:19.619 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.619 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.619 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:15:19.619 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:15:19.619 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:15:19.619 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:19.619 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:15:19.619 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.619 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.619 05:53:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.619 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:19.619 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:19.620 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:15:19.620 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.620 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.620 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.620 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:15:19.620 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:15:19.620 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:15:19.620 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:19.620 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:19.620 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.620 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.620 [2024-12-12 05:53:27.133202] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:19.620 [2024-12-12 05:53:27.133254] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.620 [2024-12-12 05:53:27.133286] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:19.620 [2024-12-12 05:53:27.133296] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:19.620 [2024-12-12 05:53:27.135390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.620 [2024-12-12 05:53:27.135430] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:19.620 [2024-12-12 05:53:27.135518] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:19.620 [2024-12-12 05:53:27.135570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:19.879 pt2 00:15:19.879 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.879 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:19.879 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.879 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:19.879 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:19.879 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:19.879 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:19.879 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.879 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.879 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.879 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.879 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.879 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.879 05:53:27 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:19.879 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.879 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.879 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.879 "name": "raid_bdev1", 00:15:19.879 "uuid": "bc09c853-8b9e-4b8b-8502-948e52e4a3d8", 00:15:19.879 "strip_size_kb": 64, 00:15:19.879 "state": "configuring", 00:15:19.879 "raid_level": "raid5f", 00:15:19.879 "superblock": true, 00:15:19.879 "num_base_bdevs": 3, 00:15:19.879 "num_base_bdevs_discovered": 1, 00:15:19.879 "num_base_bdevs_operational": 2, 00:15:19.879 "base_bdevs_list": [ 00:15:19.879 { 00:15:19.880 "name": null, 00:15:19.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.880 "is_configured": false, 00:15:19.880 "data_offset": 2048, 00:15:19.880 "data_size": 63488 00:15:19.880 }, 00:15:19.880 { 00:15:19.880 "name": "pt2", 00:15:19.880 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:19.880 "is_configured": true, 00:15:19.880 "data_offset": 2048, 00:15:19.880 "data_size": 63488 00:15:19.880 }, 00:15:19.880 { 00:15:19.880 "name": null, 00:15:19.880 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:19.880 "is_configured": false, 00:15:19.880 "data_offset": 2048, 00:15:19.880 "data_size": 63488 00:15:19.880 } 00:15:19.880 ] 00:15:19.880 }' 00:15:19.880 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.880 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # 
i=2 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.139 [2024-12-12 05:53:27.544509] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:20.139 [2024-12-12 05:53:27.544603] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.139 [2024-12-12 05:53:27.544639] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:15:20.139 [2024-12-12 05:53:27.544685] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.139 [2024-12-12 05:53:27.545128] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.139 [2024-12-12 05:53:27.545198] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:20.139 [2024-12-12 05:53:27.545303] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:20.139 [2024-12-12 05:53:27.545362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:20.139 [2024-12-12 05:53:27.545515] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:20.139 [2024-12-12 05:53:27.545556] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:20.139 [2024-12-12 05:53:27.545835] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:20.139 [2024-12-12 05:53:27.551087] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:20.139 [2024-12-12 05:53:27.551141] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:15:20.139 [2024-12-12 05:53:27.551526] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.139 pt3 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.139 "name": "raid_bdev1", 00:15:20.139 "uuid": "bc09c853-8b9e-4b8b-8502-948e52e4a3d8", 00:15:20.139 "strip_size_kb": 64, 00:15:20.139 "state": "online", 00:15:20.139 "raid_level": "raid5f", 00:15:20.139 "superblock": true, 00:15:20.139 "num_base_bdevs": 3, 00:15:20.139 "num_base_bdevs_discovered": 2, 00:15:20.139 "num_base_bdevs_operational": 2, 00:15:20.139 "base_bdevs_list": [ 00:15:20.139 { 00:15:20.139 "name": null, 00:15:20.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.139 "is_configured": false, 00:15:20.139 "data_offset": 2048, 00:15:20.139 "data_size": 63488 00:15:20.139 }, 00:15:20.139 { 00:15:20.139 "name": "pt2", 00:15:20.139 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.139 "is_configured": true, 00:15:20.139 "data_offset": 2048, 00:15:20.139 "data_size": 63488 00:15:20.139 }, 00:15:20.139 { 00:15:20.139 "name": "pt3", 00:15:20.139 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:20.139 "is_configured": true, 00:15:20.139 "data_offset": 2048, 00:15:20.139 "data_size": 63488 00:15:20.139 } 00:15:20.139 ] 00:15:20.139 }' 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.139 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.710 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:20.710 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.710 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.710 [2024-12-12 05:53:27.973634] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.710 [2024-12-12 05:53:27.973660] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:20.710 [2024-12-12 05:53:27.973718] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:15:20.710 [2024-12-12 05:53:27.973771] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:20.710 [2024-12-12 05:53:27.973780] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:15:20.710 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.710 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.710 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.710 05:53:27 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:15:20.710 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.710 05:53:27 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:20.710 05:53:28 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.710 [2024-12-12 05:53:28.049547] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:20.710 [2024-12-12 05:53:28.049594] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.710 [2024-12-12 05:53:28.049609] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:20.710 [2024-12-12 05:53:28.049617] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.710 [2024-12-12 05:53:28.051665] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.710 [2024-12-12 05:53:28.051741] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:20.710 [2024-12-12 05:53:28.051825] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:20.710 [2024-12-12 05:53:28.051874] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:20.710 [2024-12-12 05:53:28.052016] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:15:20.710 [2024-12-12 05:53:28.052028] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:20.710 [2024-12-12 05:53:28.052042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:15:20.710 [2024-12-12 05:53:28.052094] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:20.710 pt1 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:15:20.710 05:53:28 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.710 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.711 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.711 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.711 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.711 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:20.711 "name": "raid_bdev1", 00:15:20.711 "uuid": "bc09c853-8b9e-4b8b-8502-948e52e4a3d8", 00:15:20.711 "strip_size_kb": 64, 00:15:20.711 "state": "configuring", 00:15:20.711 "raid_level": "raid5f", 00:15:20.711 
"superblock": true, 00:15:20.711 "num_base_bdevs": 3, 00:15:20.711 "num_base_bdevs_discovered": 1, 00:15:20.711 "num_base_bdevs_operational": 2, 00:15:20.711 "base_bdevs_list": [ 00:15:20.711 { 00:15:20.711 "name": null, 00:15:20.711 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.711 "is_configured": false, 00:15:20.711 "data_offset": 2048, 00:15:20.711 "data_size": 63488 00:15:20.711 }, 00:15:20.711 { 00:15:20.711 "name": "pt2", 00:15:20.711 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:20.711 "is_configured": true, 00:15:20.711 "data_offset": 2048, 00:15:20.711 "data_size": 63488 00:15:20.711 }, 00:15:20.711 { 00:15:20.711 "name": null, 00:15:20.711 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:20.711 "is_configured": false, 00:15:20.711 "data_offset": 2048, 00:15:20.711 "data_size": 63488 00:15:20.711 } 00:15:20.711 ] 00:15:20.711 }' 00:15:20.711 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:20.711 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.971 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:20.971 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:15:20.971 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.971 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.232 [2024-12-12 05:53:28.512744] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:21.232 [2024-12-12 05:53:28.512845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:21.232 [2024-12-12 05:53:28.512882] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:21.232 [2024-12-12 05:53:28.512910] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:21.232 [2024-12-12 05:53:28.513408] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:21.232 [2024-12-12 05:53:28.513465] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:21.232 [2024-12-12 05:53:28.513590] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:21.232 [2024-12-12 05:53:28.513644] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:21.232 [2024-12-12 05:53:28.513819] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:15:21.232 [2024-12-12 05:53:28.513857] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:21.232 [2024-12-12 05:53:28.514137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:21.232 [2024-12-12 05:53:28.519296] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:15:21.232 [2024-12-12 05:53:28.519359] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:15:21.232 [2024-12-12 05:53:28.519650] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:21.232 pt3 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.232 "name": "raid_bdev1", 00:15:21.232 "uuid": "bc09c853-8b9e-4b8b-8502-948e52e4a3d8", 00:15:21.232 "strip_size_kb": 64, 00:15:21.232 "state": "online", 00:15:21.232 "raid_level": 
"raid5f", 00:15:21.232 "superblock": true, 00:15:21.232 "num_base_bdevs": 3, 00:15:21.232 "num_base_bdevs_discovered": 2, 00:15:21.232 "num_base_bdevs_operational": 2, 00:15:21.232 "base_bdevs_list": [ 00:15:21.232 { 00:15:21.232 "name": null, 00:15:21.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.232 "is_configured": false, 00:15:21.232 "data_offset": 2048, 00:15:21.232 "data_size": 63488 00:15:21.232 }, 00:15:21.232 { 00:15:21.232 "name": "pt2", 00:15:21.232 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:21.232 "is_configured": true, 00:15:21.232 "data_offset": 2048, 00:15:21.232 "data_size": 63488 00:15:21.232 }, 00:15:21.232 { 00:15:21.232 "name": "pt3", 00:15:21.232 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:21.232 "is_configured": true, 00:15:21.232 "data_offset": 2048, 00:15:21.232 "data_size": 63488 00:15:21.232 } 00:15:21.232 ] 00:15:21.232 }' 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.232 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.491 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:15:21.491 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.491 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:15:21.492 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.492 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.492 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:15:21.492 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:21.492 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:21.492 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.492 05:53:28 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:15:21.492 [2024-12-12 05:53:28.981834] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:21.492 05:53:28 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.752 05:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' bc09c853-8b9e-4b8b-8502-948e52e4a3d8 '!=' bc09c853-8b9e-4b8b-8502-948e52e4a3d8 ']' 00:15:21.752 05:53:29 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81403 00:15:21.752 05:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81403 ']' 00:15:21.752 05:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81403 00:15:21.752 05:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:15:21.752 05:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.752 05:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81403 00:15:21.752 05:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:21.752 05:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:21.752 05:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81403' 00:15:21.752 killing process with pid 81403 00:15:21.752 05:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81403 00:15:21.752 [2024-12-12 05:53:29.063812] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:21.752 [2024-12-12 05:53:29.063886] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: 
raid_bdev_destruct 00:15:21.752 [2024-12-12 05:53:29.063941] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:21.752 [2024-12-12 05:53:29.063951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:15:21.752 05:53:29 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 81403 00:15:22.012 [2024-12-12 05:53:29.344361] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:22.951 05:53:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:15:22.951 00:15:22.951 real 0m7.324s 00:15:22.951 user 0m11.440s 00:15:22.951 sys 0m1.264s 00:15:22.951 05:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:22.951 05:53:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.951 ************************************ 00:15:22.951 END TEST raid5f_superblock_test 00:15:22.951 ************************************ 00:15:22.951 05:53:30 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:15:22.951 05:53:30 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:15:22.951 05:53:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:22.951 05:53:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:22.951 05:53:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.951 ************************************ 00:15:22.951 START TEST raid5f_rebuild_test 00:15:22.951 ************************************ 00:15:22.951 05:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:15:22.951 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:22.951 05:53:30 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:22.951 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:22.951 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:22.951 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:22.951 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 
00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:22.952 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:23.212 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81798 00:15:23.212 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:23.212 05:53:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81798 00:15:23.212 05:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81798 ']' 00:15:23.212 05:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.212 05:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.212 05:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:23.212 05:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.212 05:53:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.212 [2024-12-12 05:53:30.555562] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:15:23.212 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:23.212 Zero copy mechanism will not be used. 00:15:23.212 [2024-12-12 05:53:30.555727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81798 ] 00:15:23.212 [2024-12-12 05:53:30.726332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.471 [2024-12-12 05:53:30.832522] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.730 [2024-12-12 05:53:31.021588] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.730 [2024-12-12 05:53:31.021721] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.989 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:23.989 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:23.989 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:23.989 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:23.989 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.989 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.989 BaseBdev1_malloc 00:15:23.989 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.989 
05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:23.989 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.989 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.989 [2024-12-12 05:53:31.436378] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:23.989 [2024-12-12 05:53:31.436492] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.989 [2024-12-12 05:53:31.436542] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:23.989 [2024-12-12 05:53:31.436573] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.989 [2024-12-12 05:53:31.438730] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.989 [2024-12-12 05:53:31.438832] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:23.989 BaseBdev1 00:15:23.989 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.989 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:23.989 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:23.989 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.989 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.989 BaseBdev2_malloc 00:15:23.989 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.989 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:23.989 05:53:31 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.990 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.990 [2024-12-12 05:53:31.490425] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:23.990 [2024-12-12 05:53:31.490500] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:23.990 [2024-12-12 05:53:31.490532] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:23.990 [2024-12-12 05:53:31.490544] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:23.990 [2024-12-12 05:53:31.492505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:23.990 [2024-12-12 05:53:31.492550] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:23.990 BaseBdev2 00:15:23.990 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.990 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:23.990 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:23.990 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.990 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.249 BaseBdev3_malloc 00:15:24.249 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.249 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:24.249 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.249 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.249 [2024-12-12 05:53:31.565450] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:24.250 [2024-12-12 05:53:31.565576] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.250 [2024-12-12 05:53:31.565602] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:24.250 [2024-12-12 05:53:31.565615] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.250 [2024-12-12 05:53:31.567653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.250 [2024-12-12 05:53:31.567691] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:24.250 BaseBdev3 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.250 spare_malloc 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.250 spare_delay 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.250 [2024-12-12 05:53:31.632112] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:24.250 [2024-12-12 05:53:31.632205] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.250 [2024-12-12 05:53:31.632228] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:24.250 [2024-12-12 05:53:31.632238] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.250 [2024-12-12 05:53:31.634250] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.250 [2024-12-12 05:53:31.634287] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:24.250 spare 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.250 [2024-12-12 05:53:31.644157] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:24.250 [2024-12-12 05:53:31.645855] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:24.250 [2024-12-12 05:53:31.645916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:24.250 [2024-12-12 05:53:31.645997] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:24.250 [2024-12-12 05:53:31.646020] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:24.250 [2024-12-12 
05:53:31.646248] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:24.250 [2024-12-12 05:53:31.651752] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:24.250 [2024-12-12 05:53:31.651775] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:24.250 [2024-12-12 05:53:31.651954] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:24.250 "name": "raid_bdev1", 00:15:24.250 "uuid": "cce287e7-2b2e-49ac-a473-9660b6ac275d", 00:15:24.250 "strip_size_kb": 64, 00:15:24.250 "state": "online", 00:15:24.250 "raid_level": "raid5f", 00:15:24.250 "superblock": false, 00:15:24.250 "num_base_bdevs": 3, 00:15:24.250 "num_base_bdevs_discovered": 3, 00:15:24.250 "num_base_bdevs_operational": 3, 00:15:24.250 "base_bdevs_list": [ 00:15:24.250 { 00:15:24.250 "name": "BaseBdev1", 00:15:24.250 "uuid": "fa9e1bb7-0e8a-5cd5-b708-77ea7db71509", 00:15:24.250 "is_configured": true, 00:15:24.250 "data_offset": 0, 00:15:24.250 "data_size": 65536 00:15:24.250 }, 00:15:24.250 { 00:15:24.250 "name": "BaseBdev2", 00:15:24.250 "uuid": "8b527875-c60c-5a09-9070-3c031250a695", 00:15:24.250 "is_configured": true, 00:15:24.250 "data_offset": 0, 00:15:24.250 "data_size": 65536 00:15:24.250 }, 00:15:24.250 { 00:15:24.250 "name": "BaseBdev3", 00:15:24.250 "uuid": "568c0693-f8da-5e13-8f2a-e26c89550298", 00:15:24.250 "is_configured": true, 00:15:24.250 "data_offset": 0, 00:15:24.250 "data_size": 65536 00:15:24.250 } 00:15:24.250 ] 00:15:24.250 }' 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:24.250 05:53:31 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.510 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:24.510 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:24.510 05:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:24.510 05:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.510 [2024-12-12 05:53:32.025806] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:24.770 05:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.770 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:15:24.770 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:24.770 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.770 05:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.770 05:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:24.770 05:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.770 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:24.770 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:24.770 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:24.770 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:24.770 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:24.770 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:24.770 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:24.770 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:24.770 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:24.770 05:53:32 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:15:24.770 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:24.770 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:24.770 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:24.770 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:25.030 [2024-12-12 05:53:32.325279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:25.030 /dev/nbd0 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:25.030 1+0 records in 00:15:25.030 1+0 records out 00:15:25.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000329574 s, 
12.4 MB/s 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:25.030 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:15:25.289 512+0 records in 00:15:25.289 512+0 records out 00:15:25.289 67108864 bytes (67 MB, 64 MiB) copied, 0.406055 s, 165 MB/s 00:15:25.289 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:25.289 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:25.289 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:25.289 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:25.289 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:25.289 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:15:25.549 05:53:32 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:25.549 [2024-12-12 05:53:32.994335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.549 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:25.549 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:25.549 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:25.549 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:25.549 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:25.549 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:25.549 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:25.549 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:25.549 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:25.549 05:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.549 05:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.549 [2024-12-12 05:53:33.030136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:25.549 05:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.549 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:25.549 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:25.549 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:15:25.549 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:25.549 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:25.550 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:25.550 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:25.550 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:25.550 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:25.550 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:25.550 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.550 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.550 05:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.550 05:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.550 05:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.809 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:25.809 "name": "raid_bdev1", 00:15:25.809 "uuid": "cce287e7-2b2e-49ac-a473-9660b6ac275d", 00:15:25.809 "strip_size_kb": 64, 00:15:25.809 "state": "online", 00:15:25.809 "raid_level": "raid5f", 00:15:25.809 "superblock": false, 00:15:25.809 "num_base_bdevs": 3, 00:15:25.809 "num_base_bdevs_discovered": 2, 00:15:25.809 "num_base_bdevs_operational": 2, 00:15:25.809 "base_bdevs_list": [ 00:15:25.809 { 00:15:25.809 "name": null, 00:15:25.809 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.809 "is_configured": false, 00:15:25.809 "data_offset": 0, 00:15:25.809 "data_size": 65536 00:15:25.809 }, 
00:15:25.809 { 00:15:25.809 "name": "BaseBdev2", 00:15:25.809 "uuid": "8b527875-c60c-5a09-9070-3c031250a695", 00:15:25.809 "is_configured": true, 00:15:25.809 "data_offset": 0, 00:15:25.809 "data_size": 65536 00:15:25.809 }, 00:15:25.809 { 00:15:25.809 "name": "BaseBdev3", 00:15:25.809 "uuid": "568c0693-f8da-5e13-8f2a-e26c89550298", 00:15:25.809 "is_configured": true, 00:15:25.809 "data_offset": 0, 00:15:25.809 "data_size": 65536 00:15:25.809 } 00:15:25.809 ] 00:15:25.809 }' 00:15:25.809 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:25.809 05:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.069 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:26.069 05:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.069 05:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.069 [2024-12-12 05:53:33.481321] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:26.069 [2024-12-12 05:53:33.499459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:15:26.069 05:53:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.069 05:53:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:26.069 [2024-12-12 05:53:33.507382] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:27.008 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.008 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.008 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:27.008 05:53:34 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:15:27.008 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.008 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.008 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.008 05:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.008 05:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.268 05:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.268 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.268 "name": "raid_bdev1", 00:15:27.268 "uuid": "cce287e7-2b2e-49ac-a473-9660b6ac275d", 00:15:27.268 "strip_size_kb": 64, 00:15:27.268 "state": "online", 00:15:27.268 "raid_level": "raid5f", 00:15:27.268 "superblock": false, 00:15:27.268 "num_base_bdevs": 3, 00:15:27.268 "num_base_bdevs_discovered": 3, 00:15:27.268 "num_base_bdevs_operational": 3, 00:15:27.268 "process": { 00:15:27.268 "type": "rebuild", 00:15:27.268 "target": "spare", 00:15:27.268 "progress": { 00:15:27.268 "blocks": 20480, 00:15:27.268 "percent": 15 00:15:27.268 } 00:15:27.268 }, 00:15:27.268 "base_bdevs_list": [ 00:15:27.268 { 00:15:27.269 "name": "spare", 00:15:27.269 "uuid": "28e5aa2f-e4e8-50c3-b096-29f2a12456ca", 00:15:27.269 "is_configured": true, 00:15:27.269 "data_offset": 0, 00:15:27.269 "data_size": 65536 00:15:27.269 }, 00:15:27.269 { 00:15:27.269 "name": "BaseBdev2", 00:15:27.269 "uuid": "8b527875-c60c-5a09-9070-3c031250a695", 00:15:27.269 "is_configured": true, 00:15:27.269 "data_offset": 0, 00:15:27.269 "data_size": 65536 00:15:27.269 }, 00:15:27.269 { 00:15:27.269 "name": "BaseBdev3", 00:15:27.269 "uuid": "568c0693-f8da-5e13-8f2a-e26c89550298", 00:15:27.269 "is_configured": true, 00:15:27.269 
"data_offset": 0, 00:15:27.269 "data_size": 65536 00:15:27.269 } 00:15:27.269 ] 00:15:27.269 }' 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.269 [2024-12-12 05:53:34.654723] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:27.269 [2024-12-12 05:53:34.717688] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:27.269 [2024-12-12 05:53:34.717750] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.269 [2024-12-12 05:53:34.717770] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:27.269 [2024-12-12 05:53:34.717778] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:27.269 05:53:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.269 05:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.541 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:27.541 "name": "raid_bdev1", 00:15:27.541 "uuid": "cce287e7-2b2e-49ac-a473-9660b6ac275d", 00:15:27.541 "strip_size_kb": 64, 00:15:27.541 "state": "online", 00:15:27.541 "raid_level": "raid5f", 00:15:27.541 "superblock": false, 00:15:27.541 "num_base_bdevs": 3, 00:15:27.541 "num_base_bdevs_discovered": 2, 00:15:27.541 "num_base_bdevs_operational": 2, 00:15:27.541 "base_bdevs_list": [ 00:15:27.541 { 00:15:27.541 "name": null, 00:15:27.541 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.541 "is_configured": false, 00:15:27.541 "data_offset": 0, 00:15:27.541 "data_size": 65536 00:15:27.541 }, 00:15:27.541 { 00:15:27.541 
"name": "BaseBdev2", 00:15:27.541 "uuid": "8b527875-c60c-5a09-9070-3c031250a695", 00:15:27.541 "is_configured": true, 00:15:27.541 "data_offset": 0, 00:15:27.541 "data_size": 65536 00:15:27.541 }, 00:15:27.541 { 00:15:27.541 "name": "BaseBdev3", 00:15:27.541 "uuid": "568c0693-f8da-5e13-8f2a-e26c89550298", 00:15:27.541 "is_configured": true, 00:15:27.541 "data_offset": 0, 00:15:27.541 "data_size": 65536 00:15:27.541 } 00:15:27.541 ] 00:15:27.541 }' 00:15:27.541 05:53:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:27.541 05:53:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.859 05:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:27.859 05:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:27.859 05:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:27.859 05:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:27.859 05:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:27.859 05:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.859 05:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.859 05:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.859 05:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.859 05:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.859 05:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:27.859 "name": "raid_bdev1", 00:15:27.859 "uuid": "cce287e7-2b2e-49ac-a473-9660b6ac275d", 00:15:27.859 "strip_size_kb": 64, 00:15:27.859 "state": 
"online", 00:15:27.859 "raid_level": "raid5f", 00:15:27.859 "superblock": false, 00:15:27.859 "num_base_bdevs": 3, 00:15:27.859 "num_base_bdevs_discovered": 2, 00:15:27.859 "num_base_bdevs_operational": 2, 00:15:27.859 "base_bdevs_list": [ 00:15:27.859 { 00:15:27.859 "name": null, 00:15:27.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.859 "is_configured": false, 00:15:27.859 "data_offset": 0, 00:15:27.859 "data_size": 65536 00:15:27.859 }, 00:15:27.859 { 00:15:27.859 "name": "BaseBdev2", 00:15:27.859 "uuid": "8b527875-c60c-5a09-9070-3c031250a695", 00:15:27.859 "is_configured": true, 00:15:27.859 "data_offset": 0, 00:15:27.859 "data_size": 65536 00:15:27.859 }, 00:15:27.859 { 00:15:27.859 "name": "BaseBdev3", 00:15:27.859 "uuid": "568c0693-f8da-5e13-8f2a-e26c89550298", 00:15:27.859 "is_configured": true, 00:15:27.859 "data_offset": 0, 00:15:27.859 "data_size": 65536 00:15:27.859 } 00:15:27.859 ] 00:15:27.859 }' 00:15:27.859 05:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:27.859 05:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:27.859 05:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:28.119 05:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:28.119 05:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:28.119 05:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.119 05:53:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.119 [2024-12-12 05:53:35.356876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:28.119 [2024-12-12 05:53:35.373633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:15:28.119 05:53:35 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.119 05:53:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:28.119 [2024-12-12 05:53:35.381482] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:29.058 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.058 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.058 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.058 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.058 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.058 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.058 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.058 05:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.058 05:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.058 05:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.058 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.058 "name": "raid_bdev1", 00:15:29.058 "uuid": "cce287e7-2b2e-49ac-a473-9660b6ac275d", 00:15:29.058 "strip_size_kb": 64, 00:15:29.058 "state": "online", 00:15:29.058 "raid_level": "raid5f", 00:15:29.058 "superblock": false, 00:15:29.058 "num_base_bdevs": 3, 00:15:29.058 "num_base_bdevs_discovered": 3, 00:15:29.058 "num_base_bdevs_operational": 3, 00:15:29.058 "process": { 00:15:29.058 "type": "rebuild", 00:15:29.058 "target": "spare", 00:15:29.058 "progress": { 
00:15:29.058 "blocks": 20480, 00:15:29.058 "percent": 15 00:15:29.058 } 00:15:29.058 }, 00:15:29.058 "base_bdevs_list": [ 00:15:29.058 { 00:15:29.058 "name": "spare", 00:15:29.058 "uuid": "28e5aa2f-e4e8-50c3-b096-29f2a12456ca", 00:15:29.058 "is_configured": true, 00:15:29.058 "data_offset": 0, 00:15:29.058 "data_size": 65536 00:15:29.058 }, 00:15:29.058 { 00:15:29.058 "name": "BaseBdev2", 00:15:29.058 "uuid": "8b527875-c60c-5a09-9070-3c031250a695", 00:15:29.058 "is_configured": true, 00:15:29.058 "data_offset": 0, 00:15:29.058 "data_size": 65536 00:15:29.058 }, 00:15:29.058 { 00:15:29.058 "name": "BaseBdev3", 00:15:29.058 "uuid": "568c0693-f8da-5e13-8f2a-e26c89550298", 00:15:29.058 "is_configured": true, 00:15:29.058 "data_offset": 0, 00:15:29.059 "data_size": 65536 00:15:29.059 } 00:15:29.059 ] 00:15:29.059 }' 00:15:29.059 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.059 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.059 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.059 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.059 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:29.059 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:29.059 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:29.059 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=530 00:15:29.059 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:29.059 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:29.059 05:53:36 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.059 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:29.059 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:29.059 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.059 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.059 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.059 05:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.059 05:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.059 05:53:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.059 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.059 "name": "raid_bdev1", 00:15:29.059 "uuid": "cce287e7-2b2e-49ac-a473-9660b6ac275d", 00:15:29.059 "strip_size_kb": 64, 00:15:29.059 "state": "online", 00:15:29.059 "raid_level": "raid5f", 00:15:29.059 "superblock": false, 00:15:29.059 "num_base_bdevs": 3, 00:15:29.059 "num_base_bdevs_discovered": 3, 00:15:29.059 "num_base_bdevs_operational": 3, 00:15:29.059 "process": { 00:15:29.059 "type": "rebuild", 00:15:29.059 "target": "spare", 00:15:29.059 "progress": { 00:15:29.059 "blocks": 22528, 00:15:29.059 "percent": 17 00:15:29.059 } 00:15:29.059 }, 00:15:29.059 "base_bdevs_list": [ 00:15:29.059 { 00:15:29.059 "name": "spare", 00:15:29.059 "uuid": "28e5aa2f-e4e8-50c3-b096-29f2a12456ca", 00:15:29.059 "is_configured": true, 00:15:29.059 "data_offset": 0, 00:15:29.059 "data_size": 65536 00:15:29.059 }, 00:15:29.059 { 00:15:29.059 "name": "BaseBdev2", 00:15:29.059 "uuid": "8b527875-c60c-5a09-9070-3c031250a695", 00:15:29.059 "is_configured": true, 00:15:29.059 
"data_offset": 0, 00:15:29.059 "data_size": 65536 00:15:29.059 }, 00:15:29.059 { 00:15:29.059 "name": "BaseBdev3", 00:15:29.059 "uuid": "568c0693-f8da-5e13-8f2a-e26c89550298", 00:15:29.059 "is_configured": true, 00:15:29.059 "data_offset": 0, 00:15:29.059 "data_size": 65536 00:15:29.059 } 00:15:29.059 ] 00:15:29.059 }' 00:15:29.059 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.318 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:29.318 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.318 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:29.318 05:53:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:30.258 05:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:30.258 05:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.258 05:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.258 05:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.258 05:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.258 05:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.258 05:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.258 05:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.258 05:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.258 05:53:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.258 05:53:37 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.258 05:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.258 "name": "raid_bdev1", 00:15:30.258 "uuid": "cce287e7-2b2e-49ac-a473-9660b6ac275d", 00:15:30.258 "strip_size_kb": 64, 00:15:30.258 "state": "online", 00:15:30.258 "raid_level": "raid5f", 00:15:30.258 "superblock": false, 00:15:30.258 "num_base_bdevs": 3, 00:15:30.258 "num_base_bdevs_discovered": 3, 00:15:30.258 "num_base_bdevs_operational": 3, 00:15:30.258 "process": { 00:15:30.258 "type": "rebuild", 00:15:30.258 "target": "spare", 00:15:30.258 "progress": { 00:15:30.258 "blocks": 45056, 00:15:30.258 "percent": 34 00:15:30.258 } 00:15:30.258 }, 00:15:30.258 "base_bdevs_list": [ 00:15:30.258 { 00:15:30.258 "name": "spare", 00:15:30.258 "uuid": "28e5aa2f-e4e8-50c3-b096-29f2a12456ca", 00:15:30.258 "is_configured": true, 00:15:30.258 "data_offset": 0, 00:15:30.258 "data_size": 65536 00:15:30.258 }, 00:15:30.258 { 00:15:30.258 "name": "BaseBdev2", 00:15:30.258 "uuid": "8b527875-c60c-5a09-9070-3c031250a695", 00:15:30.258 "is_configured": true, 00:15:30.258 "data_offset": 0, 00:15:30.258 "data_size": 65536 00:15:30.258 }, 00:15:30.258 { 00:15:30.258 "name": "BaseBdev3", 00:15:30.258 "uuid": "568c0693-f8da-5e13-8f2a-e26c89550298", 00:15:30.258 "is_configured": true, 00:15:30.258 "data_offset": 0, 00:15:30.258 "data_size": 65536 00:15:30.258 } 00:15:30.258 ] 00:15:30.258 }' 00:15:30.258 05:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.258 05:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.258 05:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.518 05:53:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.518 05:53:37 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:15:31.487 05:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:31.487 05:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:31.487 05:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:31.487 05:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:31.487 05:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:31.487 05:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:31.487 05:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:31.487 05:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:31.487 05:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.487 05:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.487 05:53:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.487 05:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:31.487 "name": "raid_bdev1", 00:15:31.487 "uuid": "cce287e7-2b2e-49ac-a473-9660b6ac275d", 00:15:31.487 "strip_size_kb": 64, 00:15:31.487 "state": "online", 00:15:31.487 "raid_level": "raid5f", 00:15:31.487 "superblock": false, 00:15:31.487 "num_base_bdevs": 3, 00:15:31.487 "num_base_bdevs_discovered": 3, 00:15:31.487 "num_base_bdevs_operational": 3, 00:15:31.487 "process": { 00:15:31.487 "type": "rebuild", 00:15:31.487 "target": "spare", 00:15:31.487 "progress": { 00:15:31.487 "blocks": 67584, 00:15:31.487 "percent": 51 00:15:31.487 } 00:15:31.487 }, 00:15:31.487 "base_bdevs_list": [ 00:15:31.487 { 00:15:31.487 "name": "spare", 00:15:31.487 
"uuid": "28e5aa2f-e4e8-50c3-b096-29f2a12456ca", 00:15:31.487 "is_configured": true, 00:15:31.487 "data_offset": 0, 00:15:31.487 "data_size": 65536 00:15:31.487 }, 00:15:31.487 { 00:15:31.487 "name": "BaseBdev2", 00:15:31.487 "uuid": "8b527875-c60c-5a09-9070-3c031250a695", 00:15:31.487 "is_configured": true, 00:15:31.487 "data_offset": 0, 00:15:31.487 "data_size": 65536 00:15:31.487 }, 00:15:31.488 { 00:15:31.488 "name": "BaseBdev3", 00:15:31.488 "uuid": "568c0693-f8da-5e13-8f2a-e26c89550298", 00:15:31.488 "is_configured": true, 00:15:31.488 "data_offset": 0, 00:15:31.488 "data_size": 65536 00:15:31.488 } 00:15:31.488 ] 00:15:31.488 }' 00:15:31.488 05:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:31.488 05:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:31.488 05:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:31.488 05:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:31.488 05:53:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:32.427 05:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:32.427 05:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.427 05:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.427 05:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.427 05:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.427 05:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.427 05:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.427 05:53:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.427 05:53:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.427 05:53:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.687 05:53:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.687 05:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.687 "name": "raid_bdev1", 00:15:32.687 "uuid": "cce287e7-2b2e-49ac-a473-9660b6ac275d", 00:15:32.687 "strip_size_kb": 64, 00:15:32.687 "state": "online", 00:15:32.687 "raid_level": "raid5f", 00:15:32.687 "superblock": false, 00:15:32.687 "num_base_bdevs": 3, 00:15:32.687 "num_base_bdevs_discovered": 3, 00:15:32.687 "num_base_bdevs_operational": 3, 00:15:32.687 "process": { 00:15:32.687 "type": "rebuild", 00:15:32.687 "target": "spare", 00:15:32.687 "progress": { 00:15:32.687 "blocks": 92160, 00:15:32.687 "percent": 70 00:15:32.687 } 00:15:32.687 }, 00:15:32.687 "base_bdevs_list": [ 00:15:32.687 { 00:15:32.687 "name": "spare", 00:15:32.687 "uuid": "28e5aa2f-e4e8-50c3-b096-29f2a12456ca", 00:15:32.687 "is_configured": true, 00:15:32.687 "data_offset": 0, 00:15:32.687 "data_size": 65536 00:15:32.687 }, 00:15:32.687 { 00:15:32.687 "name": "BaseBdev2", 00:15:32.687 "uuid": "8b527875-c60c-5a09-9070-3c031250a695", 00:15:32.687 "is_configured": true, 00:15:32.687 "data_offset": 0, 00:15:32.687 "data_size": 65536 00:15:32.687 }, 00:15:32.687 { 00:15:32.687 "name": "BaseBdev3", 00:15:32.687 "uuid": "568c0693-f8da-5e13-8f2a-e26c89550298", 00:15:32.687 "is_configured": true, 00:15:32.687 "data_offset": 0, 00:15:32.687 "data_size": 65536 00:15:32.687 } 00:15:32.687 ] 00:15:32.687 }' 00:15:32.687 05:53:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.687 05:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.687 05:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.687 05:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.687 05:53:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:33.631 05:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:33.631 05:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:33.631 05:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.631 05:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:33.631 05:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:33.631 05:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.631 05:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.631 05:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.631 05:53:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.631 05:53:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.631 05:53:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.631 05:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.631 "name": "raid_bdev1", 00:15:33.631 "uuid": "cce287e7-2b2e-49ac-a473-9660b6ac275d", 00:15:33.631 "strip_size_kb": 64, 00:15:33.631 "state": "online", 00:15:33.631 "raid_level": "raid5f", 00:15:33.631 "superblock": false, 00:15:33.631 "num_base_bdevs": 3, 00:15:33.631 "num_base_bdevs_discovered": 3, 00:15:33.631 
"num_base_bdevs_operational": 3, 00:15:33.631 "process": { 00:15:33.631 "type": "rebuild", 00:15:33.631 "target": "spare", 00:15:33.631 "progress": { 00:15:33.631 "blocks": 114688, 00:15:33.631 "percent": 87 00:15:33.631 } 00:15:33.631 }, 00:15:33.631 "base_bdevs_list": [ 00:15:33.631 { 00:15:33.631 "name": "spare", 00:15:33.631 "uuid": "28e5aa2f-e4e8-50c3-b096-29f2a12456ca", 00:15:33.631 "is_configured": true, 00:15:33.631 "data_offset": 0, 00:15:33.631 "data_size": 65536 00:15:33.631 }, 00:15:33.631 { 00:15:33.631 "name": "BaseBdev2", 00:15:33.631 "uuid": "8b527875-c60c-5a09-9070-3c031250a695", 00:15:33.631 "is_configured": true, 00:15:33.631 "data_offset": 0, 00:15:33.631 "data_size": 65536 00:15:33.631 }, 00:15:33.631 { 00:15:33.631 "name": "BaseBdev3", 00:15:33.631 "uuid": "568c0693-f8da-5e13-8f2a-e26c89550298", 00:15:33.631 "is_configured": true, 00:15:33.631 "data_offset": 0, 00:15:33.632 "data_size": 65536 00:15:33.632 } 00:15:33.632 ] 00:15:33.632 }' 00:15:33.632 05:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.891 05:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:33.891 05:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.891 05:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:33.891 05:53:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:34.461 [2024-12-12 05:53:41.832842] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:34.461 [2024-12-12 05:53:41.833014] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:34.461 [2024-12-12 05:53:41.833080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:34.720 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:15:34.720 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:34.720 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.720 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:34.720 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:34.720 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.720 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.720 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.720 05:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.720 05:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.720 05:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.980 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.980 "name": "raid_bdev1", 00:15:34.980 "uuid": "cce287e7-2b2e-49ac-a473-9660b6ac275d", 00:15:34.980 "strip_size_kb": 64, 00:15:34.980 "state": "online", 00:15:34.980 "raid_level": "raid5f", 00:15:34.980 "superblock": false, 00:15:34.980 "num_base_bdevs": 3, 00:15:34.980 "num_base_bdevs_discovered": 3, 00:15:34.980 "num_base_bdevs_operational": 3, 00:15:34.980 "base_bdevs_list": [ 00:15:34.980 { 00:15:34.980 "name": "spare", 00:15:34.980 "uuid": "28e5aa2f-e4e8-50c3-b096-29f2a12456ca", 00:15:34.980 "is_configured": true, 00:15:34.980 "data_offset": 0, 00:15:34.980 "data_size": 65536 00:15:34.980 }, 00:15:34.980 { 00:15:34.980 "name": "BaseBdev2", 00:15:34.980 "uuid": "8b527875-c60c-5a09-9070-3c031250a695", 00:15:34.980 "is_configured": true, 00:15:34.980 
"data_offset": 0, 00:15:34.980 "data_size": 65536 00:15:34.980 }, 00:15:34.980 { 00:15:34.980 "name": "BaseBdev3", 00:15:34.980 "uuid": "568c0693-f8da-5e13-8f2a-e26c89550298", 00:15:34.980 "is_configured": true, 00:15:34.980 "data_offset": 0, 00:15:34.980 "data_size": 65536 00:15:34.980 } 00:15:34.980 ] 00:15:34.980 }' 00:15:34.980 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.980 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:34.980 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.980 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:34.980 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:34.980 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:34.980 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.980 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:34.980 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:34.980 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.980 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.981 05:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.981 05:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.981 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.981 05:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.981 05:53:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.981 "name": "raid_bdev1", 00:15:34.981 "uuid": "cce287e7-2b2e-49ac-a473-9660b6ac275d", 00:15:34.981 "strip_size_kb": 64, 00:15:34.981 "state": "online", 00:15:34.981 "raid_level": "raid5f", 00:15:34.981 "superblock": false, 00:15:34.981 "num_base_bdevs": 3, 00:15:34.981 "num_base_bdevs_discovered": 3, 00:15:34.981 "num_base_bdevs_operational": 3, 00:15:34.981 "base_bdevs_list": [ 00:15:34.981 { 00:15:34.981 "name": "spare", 00:15:34.981 "uuid": "28e5aa2f-e4e8-50c3-b096-29f2a12456ca", 00:15:34.981 "is_configured": true, 00:15:34.981 "data_offset": 0, 00:15:34.981 "data_size": 65536 00:15:34.981 }, 00:15:34.981 { 00:15:34.981 "name": "BaseBdev2", 00:15:34.981 "uuid": "8b527875-c60c-5a09-9070-3c031250a695", 00:15:34.981 "is_configured": true, 00:15:34.981 "data_offset": 0, 00:15:34.981 "data_size": 65536 00:15:34.981 }, 00:15:34.981 { 00:15:34.981 "name": "BaseBdev3", 00:15:34.981 "uuid": "568c0693-f8da-5e13-8f2a-e26c89550298", 00:15:34.981 "is_configured": true, 00:15:34.981 "data_offset": 0, 00:15:34.981 "data_size": 65536 00:15:34.981 } 00:15:34.981 ] 00:15:34.981 }' 00:15:34.981 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.981 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:34.981 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.981 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:34.981 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:34.981 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.981 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.981 05:53:42 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:34.981 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:34.981 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:34.981 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.981 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.981 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.981 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.981 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.981 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.981 05:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.981 05:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.241 05:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.241 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:35.241 "name": "raid_bdev1", 00:15:35.241 "uuid": "cce287e7-2b2e-49ac-a473-9660b6ac275d", 00:15:35.241 "strip_size_kb": 64, 00:15:35.241 "state": "online", 00:15:35.241 "raid_level": "raid5f", 00:15:35.241 "superblock": false, 00:15:35.241 "num_base_bdevs": 3, 00:15:35.241 "num_base_bdevs_discovered": 3, 00:15:35.241 "num_base_bdevs_operational": 3, 00:15:35.241 "base_bdevs_list": [ 00:15:35.241 { 00:15:35.241 "name": "spare", 00:15:35.241 "uuid": "28e5aa2f-e4e8-50c3-b096-29f2a12456ca", 00:15:35.241 "is_configured": true, 00:15:35.241 "data_offset": 0, 00:15:35.241 "data_size": 65536 00:15:35.241 }, 00:15:35.241 { 00:15:35.241 
"name": "BaseBdev2", 00:15:35.241 "uuid": "8b527875-c60c-5a09-9070-3c031250a695", 00:15:35.241 "is_configured": true, 00:15:35.241 "data_offset": 0, 00:15:35.241 "data_size": 65536 00:15:35.241 }, 00:15:35.241 { 00:15:35.241 "name": "BaseBdev3", 00:15:35.241 "uuid": "568c0693-f8da-5e13-8f2a-e26c89550298", 00:15:35.241 "is_configured": true, 00:15:35.241 "data_offset": 0, 00:15:35.241 "data_size": 65536 00:15:35.241 } 00:15:35.241 ] 00:15:35.241 }' 00:15:35.241 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:35.241 05:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.501 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:35.501 05:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.501 05:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.501 [2024-12-12 05:53:42.981938] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:35.501 [2024-12-12 05:53:42.982028] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.501 [2024-12-12 05:53:42.982169] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.501 [2024-12-12 05:53:42.982288] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.501 [2024-12-12 05:53:42.982338] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:35.501 05:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.501 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:35.501 05:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.501 05:53:42 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.501 05:53:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:35.501 05:53:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:35.761 /dev/nbd0 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:35.761 05:53:43 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:35.761 1+0 records in 00:15:35.761 1+0 records out 00:15:35.761 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275648 s, 14.9 MB/s 00:15:35.761 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:36.020 /dev/nbd1 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.020 1+0 records in 00:15:36.020 1+0 records out 00:15:36.020 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023621 s, 17.3 MB/s 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.020 05:53:43 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:36.020 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:36.279 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:36.279 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:36.279 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:36.279 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:36.279 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:36.279 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:36.279 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:36.538 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:36.538 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:36.538 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:36.538 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:36.538 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:36.538 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:36.538 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:36.538 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:15:36.538 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:36.538 05:53:43 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:36.797 05:53:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:36.797 05:53:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:36.797 05:53:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:36.797 05:53:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:36.797 05:53:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:36.797 05:53:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:36.797 05:53:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:36.797 05:53:44 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:36.797 05:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:36.797 05:53:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81798 00:15:36.797 05:53:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81798 ']' 00:15:36.797 05:53:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81798 00:15:36.797 05:53:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:15:36.797 05:53:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.797 05:53:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81798 00:15:36.797 killing process with pid 81798 00:15:36.797 Received shutdown signal, test time was about 60.000000 seconds 00:15:36.797 00:15:36.797 Latency(us) 00:15:36.797 
[2024-12-12T05:53:44.319Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.797 [2024-12-12T05:53:44.319Z] =================================================================================================================== 00:15:36.797 [2024-12-12T05:53:44.319Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:36.797 05:53:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:36.797 05:53:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:36.797 05:53:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81798' 00:15:36.797 05:53:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81798 00:15:36.797 [2024-12-12 05:53:44.151267] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:36.797 05:53:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81798 00:15:37.056 [2024-12-12 05:53:44.520598] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:38.435 00:15:38.435 real 0m15.095s 00:15:38.435 user 0m18.463s 00:15:38.435 sys 0m1.996s 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.435 ************************************ 00:15:38.435 END TEST raid5f_rebuild_test 00:15:38.435 ************************************ 00:15:38.435 05:53:45 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:15:38.435 05:53:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:38.435 05:53:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:38.435 05:53:45 bdev_raid -- 
common/autotest_common.sh@10 -- # set +x 00:15:38.435 ************************************ 00:15:38.435 START TEST raid5f_rebuild_test_sb 00:15:38.435 ************************************ 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82144 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82144 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82144 ']' 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:38.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:38.435 05:53:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:38.435 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:38.435 Zero copy mechanism will not be used. 00:15:38.435 [2024-12-12 05:53:45.721562] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:15:38.435 [2024-12-12 05:53:45.721692] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82144 ] 00:15:38.435 [2024-12-12 05:53:45.893866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.695 [2024-12-12 05:53:45.992938] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:38.695 [2024-12-12 05:53:46.178983] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:38.695 [2024-12-12 05:53:46.179020] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.265 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:39.265 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:39.265 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:15:39.265 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:39.265 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.265 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.265 BaseBdev1_malloc 00:15:39.265 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.265 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:39.265 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.265 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.265 [2024-12-12 05:53:46.573311] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:39.265 [2024-12-12 05:53:46.573374] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.265 [2024-12-12 05:53:46.573412] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:39.265 [2024-12-12 05:53:46.573423] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.265 [2024-12-12 05:53:46.575468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.265 [2024-12-12 05:53:46.575522] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:39.265 BaseBdev1 00:15:39.265 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.265 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:39.265 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:39.265 05:53:46 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.265 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.265 BaseBdev2_malloc 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.266 [2024-12-12 05:53:46.622987] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:39.266 [2024-12-12 05:53:46.623052] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.266 [2024-12-12 05:53:46.623071] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:39.266 [2024-12-12 05:53:46.623082] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.266 [2024-12-12 05:53:46.625065] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.266 [2024-12-12 05:53:46.625100] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:39.266 BaseBdev2 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:15:39.266 BaseBdev3_malloc 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.266 [2024-12-12 05:53:46.707215] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:39.266 [2024-12-12 05:53:46.707273] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.266 [2024-12-12 05:53:46.707309] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:39.266 [2024-12-12 05:53:46.707319] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.266 [2024-12-12 05:53:46.709310] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.266 [2024-12-12 05:53:46.709348] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:39.266 BaseBdev3 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.266 spare_malloc 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.266 spare_delay 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.266 [2024-12-12 05:53:46.767386] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:39.266 [2024-12-12 05:53:46.767443] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:39.266 [2024-12-12 05:53:46.767478] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:15:39.266 [2024-12-12 05:53:46.767488] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:39.266 [2024-12-12 05:53:46.769552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:39.266 [2024-12-12 05:53:46.769592] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:39.266 spare 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.266 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.266 [2024-12-12 05:53:46.779433] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.266 [2024-12-12 05:53:46.781163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:39.266 [2024-12-12 05:53:46.781243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:39.266 [2024-12-12 05:53:46.781412] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:39.266 [2024-12-12 05:53:46.781423] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:39.266 [2024-12-12 05:53:46.781710] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:15:39.526 [2024-12-12 05:53:46.786682] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:39.526 [2024-12-12 05:53:46.786726] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:39.526 [2024-12-12 05:53:46.786905] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.526 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.526 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:39.526 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:39.526 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:39.526 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.526 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.526 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.526 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:15:39.526 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.526 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.526 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.526 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.526 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.526 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.526 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.526 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.526 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.526 "name": "raid_bdev1", 00:15:39.526 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:39.526 "strip_size_kb": 64, 00:15:39.526 "state": "online", 00:15:39.526 "raid_level": "raid5f", 00:15:39.526 "superblock": true, 00:15:39.526 "num_base_bdevs": 3, 00:15:39.526 "num_base_bdevs_discovered": 3, 00:15:39.526 "num_base_bdevs_operational": 3, 00:15:39.526 "base_bdevs_list": [ 00:15:39.526 { 00:15:39.526 "name": "BaseBdev1", 00:15:39.526 "uuid": "916b0d79-b888-519a-a17a-19e13156461a", 00:15:39.526 "is_configured": true, 00:15:39.526 "data_offset": 2048, 00:15:39.526 "data_size": 63488 00:15:39.526 }, 00:15:39.526 { 00:15:39.526 "name": "BaseBdev2", 00:15:39.526 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:39.526 "is_configured": true, 00:15:39.526 "data_offset": 2048, 00:15:39.526 "data_size": 63488 00:15:39.526 }, 00:15:39.526 { 00:15:39.526 "name": "BaseBdev3", 00:15:39.526 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:39.526 "is_configured": true, 
00:15:39.526 "data_offset": 2048, 00:15:39.526 "data_size": 63488 00:15:39.526 } 00:15:39.526 ] 00:15:39.526 }' 00:15:39.526 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.526 05:53:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.786 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:39.786 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.786 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.786 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:39.786 [2024-12-12 05:53:47.240475] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.786 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.786 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:15:39.786 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.786 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.786 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:39.786 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:39.786 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.046 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:40.046 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:40.046 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:40.046 05:53:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:40.046 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:40.046 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.046 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:40.046 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:40.046 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:40.046 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:40.046 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:40.046 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:40.046 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:40.046 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:40.046 [2024-12-12 05:53:47.515856] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:40.046 /dev/nbd0 00:15:40.046 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:40.046 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:40.046 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:40.046 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:40.046 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:40.046 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 
-- # (( i <= 20 )) 00:15:40.046 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:40.307 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:40.307 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:40.307 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:40.307 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:40.307 1+0 records in 00:15:40.307 1+0 records out 00:15:40.307 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411324 s, 10.0 MB/s 00:15:40.307 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.307 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:40.307 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:40.307 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:40.307 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:40.307 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:40.307 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:40.307 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:15:40.307 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:15:40.307 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:15:40.307 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:15:40.566 496+0 records in 00:15:40.566 496+0 records out 00:15:40.566 65011712 bytes (65 MB, 62 MiB) copied, 0.350443 s, 186 MB/s 00:15:40.566 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:40.566 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:40.566 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:40.566 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:40.566 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:40.566 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.566 05:53:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:40.826 [2024-12-12 05:53:48.147607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.826 [2024-12-12 05:53:48.163619] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.826 05:53:48 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.826 "name": "raid_bdev1", 00:15:40.826 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:40.826 "strip_size_kb": 64, 00:15:40.826 "state": "online", 00:15:40.826 "raid_level": "raid5f", 00:15:40.826 "superblock": true, 00:15:40.826 "num_base_bdevs": 3, 00:15:40.826 "num_base_bdevs_discovered": 2, 00:15:40.826 "num_base_bdevs_operational": 2, 00:15:40.826 "base_bdevs_list": [ 00:15:40.826 { 00:15:40.826 "name": null, 00:15:40.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.826 "is_configured": false, 00:15:40.826 "data_offset": 0, 00:15:40.826 "data_size": 63488 00:15:40.826 }, 00:15:40.826 { 00:15:40.826 "name": "BaseBdev2", 00:15:40.826 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:40.826 "is_configured": true, 00:15:40.826 "data_offset": 2048, 00:15:40.826 "data_size": 63488 00:15:40.826 }, 00:15:40.826 { 00:15:40.826 "name": "BaseBdev3", 00:15:40.826 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:40.826 "is_configured": true, 00:15:40.826 "data_offset": 2048, 00:15:40.826 "data_size": 63488 00:15:40.826 } 00:15:40.826 ] 00:15:40.826 }' 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.826 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.086 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:41.086 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.086 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:41.086 [2024-12-12 05:53:48.598868] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:41.345 [2024-12-12 05:53:48.616339] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:15:41.345 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.345 05:53:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:41.345 [2024-12-12 05:53:48.624219] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:42.284 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.284 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.284 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.284 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.284 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.284 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.284 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.284 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.284 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.284 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.284 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.284 "name": "raid_bdev1", 00:15:42.284 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:42.284 "strip_size_kb": 64, 00:15:42.284 "state": "online", 00:15:42.284 "raid_level": "raid5f", 00:15:42.284 
"superblock": true, 00:15:42.284 "num_base_bdevs": 3, 00:15:42.284 "num_base_bdevs_discovered": 3, 00:15:42.284 "num_base_bdevs_operational": 3, 00:15:42.284 "process": { 00:15:42.284 "type": "rebuild", 00:15:42.284 "target": "spare", 00:15:42.284 "progress": { 00:15:42.284 "blocks": 20480, 00:15:42.284 "percent": 16 00:15:42.284 } 00:15:42.284 }, 00:15:42.284 "base_bdevs_list": [ 00:15:42.284 { 00:15:42.284 "name": "spare", 00:15:42.284 "uuid": "60f06098-83f0-5c2d-b41a-7b905353dfe7", 00:15:42.285 "is_configured": true, 00:15:42.285 "data_offset": 2048, 00:15:42.285 "data_size": 63488 00:15:42.285 }, 00:15:42.285 { 00:15:42.285 "name": "BaseBdev2", 00:15:42.285 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:42.285 "is_configured": true, 00:15:42.285 "data_offset": 2048, 00:15:42.285 "data_size": 63488 00:15:42.285 }, 00:15:42.285 { 00:15:42.285 "name": "BaseBdev3", 00:15:42.285 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:42.285 "is_configured": true, 00:15:42.285 "data_offset": 2048, 00:15:42.285 "data_size": 63488 00:15:42.285 } 00:15:42.285 ] 00:15:42.285 }' 00:15:42.285 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.285 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.285 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.285 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.285 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:42.285 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.285 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.285 [2024-12-12 05:53:49.755734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:15:42.545 [2024-12-12 05:53:49.831804] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:42.545 [2024-12-12 05:53:49.831882] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.545 [2024-12-12 05:53:49.831901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:42.545 [2024-12-12 05:53:49.831909] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:42.545 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.545 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:42.545 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:42.545 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.545 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.545 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.545 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:42.545 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.545 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.545 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.545 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.545 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.545 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.545 
05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.545 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.545 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.545 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.545 "name": "raid_bdev1", 00:15:42.545 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:42.545 "strip_size_kb": 64, 00:15:42.545 "state": "online", 00:15:42.545 "raid_level": "raid5f", 00:15:42.545 "superblock": true, 00:15:42.545 "num_base_bdevs": 3, 00:15:42.545 "num_base_bdevs_discovered": 2, 00:15:42.545 "num_base_bdevs_operational": 2, 00:15:42.545 "base_bdevs_list": [ 00:15:42.545 { 00:15:42.545 "name": null, 00:15:42.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.545 "is_configured": false, 00:15:42.545 "data_offset": 0, 00:15:42.545 "data_size": 63488 00:15:42.545 }, 00:15:42.545 { 00:15:42.545 "name": "BaseBdev2", 00:15:42.545 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:42.545 "is_configured": true, 00:15:42.545 "data_offset": 2048, 00:15:42.545 "data_size": 63488 00:15:42.545 }, 00:15:42.545 { 00:15:42.545 "name": "BaseBdev3", 00:15:42.545 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:42.545 "is_configured": true, 00:15:42.545 "data_offset": 2048, 00:15:42.545 "data_size": 63488 00:15:42.545 } 00:15:42.545 ] 00:15:42.545 }' 00:15:42.545 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.545 05:53:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.805 05:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:42.805 05:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.805 05:53:50 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:42.805 05:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:42.805 05:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.805 05:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.805 05:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.805 05:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.805 05:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:42.805 05:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.065 05:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:43.065 "name": "raid_bdev1", 00:15:43.065 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:43.065 "strip_size_kb": 64, 00:15:43.065 "state": "online", 00:15:43.065 "raid_level": "raid5f", 00:15:43.065 "superblock": true, 00:15:43.065 "num_base_bdevs": 3, 00:15:43.065 "num_base_bdevs_discovered": 2, 00:15:43.065 "num_base_bdevs_operational": 2, 00:15:43.065 "base_bdevs_list": [ 00:15:43.065 { 00:15:43.065 "name": null, 00:15:43.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.065 "is_configured": false, 00:15:43.065 "data_offset": 0, 00:15:43.065 "data_size": 63488 00:15:43.065 }, 00:15:43.065 { 00:15:43.065 "name": "BaseBdev2", 00:15:43.065 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:43.065 "is_configured": true, 00:15:43.065 "data_offset": 2048, 00:15:43.065 "data_size": 63488 00:15:43.065 }, 00:15:43.065 { 00:15:43.065 "name": "BaseBdev3", 00:15:43.065 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:43.065 "is_configured": true, 00:15:43.065 "data_offset": 2048, 00:15:43.065 
"data_size": 63488 00:15:43.065 } 00:15:43.065 ] 00:15:43.065 }' 00:15:43.065 05:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:43.065 05:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:43.065 05:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.065 05:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:43.065 05:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:43.065 05:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.065 05:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:43.065 [2024-12-12 05:53:50.420651] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:43.065 [2024-12-12 05:53:50.436394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:15:43.065 05:53:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.065 05:53:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:43.066 [2024-12-12 05:53:50.443836] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:44.005 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.005 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.005 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.005 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.005 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:15:44.005 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.005 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.005 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.005 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.005 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.005 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.005 "name": "raid_bdev1", 00:15:44.005 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:44.005 "strip_size_kb": 64, 00:15:44.005 "state": "online", 00:15:44.005 "raid_level": "raid5f", 00:15:44.005 "superblock": true, 00:15:44.005 "num_base_bdevs": 3, 00:15:44.005 "num_base_bdevs_discovered": 3, 00:15:44.005 "num_base_bdevs_operational": 3, 00:15:44.005 "process": { 00:15:44.005 "type": "rebuild", 00:15:44.005 "target": "spare", 00:15:44.005 "progress": { 00:15:44.005 "blocks": 20480, 00:15:44.005 "percent": 16 00:15:44.005 } 00:15:44.005 }, 00:15:44.005 "base_bdevs_list": [ 00:15:44.005 { 00:15:44.005 "name": "spare", 00:15:44.005 "uuid": "60f06098-83f0-5c2d-b41a-7b905353dfe7", 00:15:44.005 "is_configured": true, 00:15:44.005 "data_offset": 2048, 00:15:44.005 "data_size": 63488 00:15:44.005 }, 00:15:44.005 { 00:15:44.005 "name": "BaseBdev2", 00:15:44.005 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:44.005 "is_configured": true, 00:15:44.005 "data_offset": 2048, 00:15:44.005 "data_size": 63488 00:15:44.005 }, 00:15:44.005 { 00:15:44.005 "name": "BaseBdev3", 00:15:44.005 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:44.005 "is_configured": true, 00:15:44.005 "data_offset": 2048, 00:15:44.005 "data_size": 63488 00:15:44.005 } 00:15:44.005 ] 00:15:44.005 }' 
00:15:44.005 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.005 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.005 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:44.265 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=545 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:44.265 "name": "raid_bdev1", 00:15:44.265 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:44.265 "strip_size_kb": 64, 00:15:44.265 "state": "online", 00:15:44.265 "raid_level": "raid5f", 00:15:44.265 "superblock": true, 00:15:44.265 "num_base_bdevs": 3, 00:15:44.265 "num_base_bdevs_discovered": 3, 00:15:44.265 "num_base_bdevs_operational": 3, 00:15:44.265 "process": { 00:15:44.265 "type": "rebuild", 00:15:44.265 "target": "spare", 00:15:44.265 "progress": { 00:15:44.265 "blocks": 22528, 00:15:44.265 "percent": 17 00:15:44.265 } 00:15:44.265 }, 00:15:44.265 "base_bdevs_list": [ 00:15:44.265 { 00:15:44.265 "name": "spare", 00:15:44.265 "uuid": "60f06098-83f0-5c2d-b41a-7b905353dfe7", 00:15:44.265 "is_configured": true, 00:15:44.265 "data_offset": 2048, 00:15:44.265 "data_size": 63488 00:15:44.265 }, 00:15:44.265 { 00:15:44.265 "name": "BaseBdev2", 00:15:44.265 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:44.265 "is_configured": true, 00:15:44.265 "data_offset": 2048, 00:15:44.265 "data_size": 63488 00:15:44.265 }, 00:15:44.265 { 00:15:44.265 "name": "BaseBdev3", 00:15:44.265 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:44.265 "is_configured": true, 00:15:44.265 "data_offset": 2048, 00:15:44.265 "data_size": 63488 00:15:44.265 } 00:15:44.265 ] 00:15:44.265 }' 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.265 05:53:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:45.205 05:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:45.205 05:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.205 05:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:45.205 05:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:45.205 05:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:45.205 05:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:45.205 05:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.205 05:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.205 05:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.205 05:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:45.475 05:53:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.475 05:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:45.475 "name": "raid_bdev1", 00:15:45.475 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:45.475 "strip_size_kb": 64, 00:15:45.475 "state": "online", 00:15:45.475 "raid_level": "raid5f", 00:15:45.475 "superblock": true, 00:15:45.475 "num_base_bdevs": 3, 00:15:45.475 "num_base_bdevs_discovered": 3, 00:15:45.475 
"num_base_bdevs_operational": 3, 00:15:45.475 "process": { 00:15:45.475 "type": "rebuild", 00:15:45.475 "target": "spare", 00:15:45.475 "progress": { 00:15:45.475 "blocks": 45056, 00:15:45.475 "percent": 35 00:15:45.475 } 00:15:45.475 }, 00:15:45.475 "base_bdevs_list": [ 00:15:45.475 { 00:15:45.475 "name": "spare", 00:15:45.475 "uuid": "60f06098-83f0-5c2d-b41a-7b905353dfe7", 00:15:45.475 "is_configured": true, 00:15:45.475 "data_offset": 2048, 00:15:45.475 "data_size": 63488 00:15:45.475 }, 00:15:45.475 { 00:15:45.475 "name": "BaseBdev2", 00:15:45.475 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:45.475 "is_configured": true, 00:15:45.475 "data_offset": 2048, 00:15:45.475 "data_size": 63488 00:15:45.475 }, 00:15:45.475 { 00:15:45.475 "name": "BaseBdev3", 00:15:45.475 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:45.475 "is_configured": true, 00:15:45.475 "data_offset": 2048, 00:15:45.475 "data_size": 63488 00:15:45.475 } 00:15:45.475 ] 00:15:45.475 }' 00:15:45.475 05:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:45.475 05:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:45.475 05:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:45.475 05:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:45.475 05:53:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:46.450 05:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:46.450 05:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:46.450 05:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:46.450 05:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:15:46.450 05:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:46.450 05:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:46.450 05:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.450 05:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.450 05:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.450 05:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:46.450 05:53:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.450 05:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:46.450 "name": "raid_bdev1", 00:15:46.450 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:46.450 "strip_size_kb": 64, 00:15:46.450 "state": "online", 00:15:46.450 "raid_level": "raid5f", 00:15:46.450 "superblock": true, 00:15:46.450 "num_base_bdevs": 3, 00:15:46.450 "num_base_bdevs_discovered": 3, 00:15:46.450 "num_base_bdevs_operational": 3, 00:15:46.450 "process": { 00:15:46.450 "type": "rebuild", 00:15:46.450 "target": "spare", 00:15:46.450 "progress": { 00:15:46.450 "blocks": 69632, 00:15:46.450 "percent": 54 00:15:46.450 } 00:15:46.450 }, 00:15:46.450 "base_bdevs_list": [ 00:15:46.450 { 00:15:46.450 "name": "spare", 00:15:46.450 "uuid": "60f06098-83f0-5c2d-b41a-7b905353dfe7", 00:15:46.450 "is_configured": true, 00:15:46.450 "data_offset": 2048, 00:15:46.450 "data_size": 63488 00:15:46.450 }, 00:15:46.450 { 00:15:46.450 "name": "BaseBdev2", 00:15:46.450 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:46.450 "is_configured": true, 00:15:46.450 "data_offset": 2048, 00:15:46.450 "data_size": 63488 00:15:46.450 }, 00:15:46.450 { 00:15:46.450 "name": "BaseBdev3", 
00:15:46.450 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:46.450 "is_configured": true, 00:15:46.450 "data_offset": 2048, 00:15:46.450 "data_size": 63488 00:15:46.450 } 00:15:46.450 ] 00:15:46.450 }' 00:15:46.450 05:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:46.450 05:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:46.450 05:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:46.710 05:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:46.710 05:53:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:47.649 05:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:47.649 05:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:47.649 05:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:47.649 05:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:47.649 05:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:47.649 05:53:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:47.649 05:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:47.649 05:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:47.649 05:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.649 05:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.649 05:53:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:47.649 05:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:47.649 "name": "raid_bdev1", 00:15:47.649 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:47.649 "strip_size_kb": 64, 00:15:47.649 "state": "online", 00:15:47.649 "raid_level": "raid5f", 00:15:47.649 "superblock": true, 00:15:47.649 "num_base_bdevs": 3, 00:15:47.649 "num_base_bdevs_discovered": 3, 00:15:47.649 "num_base_bdevs_operational": 3, 00:15:47.649 "process": { 00:15:47.649 "type": "rebuild", 00:15:47.649 "target": "spare", 00:15:47.649 "progress": { 00:15:47.649 "blocks": 92160, 00:15:47.649 "percent": 72 00:15:47.649 } 00:15:47.649 }, 00:15:47.649 "base_bdevs_list": [ 00:15:47.649 { 00:15:47.649 "name": "spare", 00:15:47.649 "uuid": "60f06098-83f0-5c2d-b41a-7b905353dfe7", 00:15:47.649 "is_configured": true, 00:15:47.649 "data_offset": 2048, 00:15:47.649 "data_size": 63488 00:15:47.649 }, 00:15:47.649 { 00:15:47.649 "name": "BaseBdev2", 00:15:47.649 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:47.649 "is_configured": true, 00:15:47.649 "data_offset": 2048, 00:15:47.649 "data_size": 63488 00:15:47.649 }, 00:15:47.649 { 00:15:47.649 "name": "BaseBdev3", 00:15:47.649 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:47.649 "is_configured": true, 00:15:47.649 "data_offset": 2048, 00:15:47.649 "data_size": 63488 00:15:47.649 } 00:15:47.649 ] 00:15:47.650 }' 00:15:47.650 05:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:47.650 05:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:47.650 05:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:47.650 05:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:47.650 05:53:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:49.030 05:53:56 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:49.030 05:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.030 05:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.030 05:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.030 05:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.030 05:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.030 05:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.030 05:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.030 05:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.030 05:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.030 05:53:56 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.030 05:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.030 "name": "raid_bdev1", 00:15:49.030 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:49.030 "strip_size_kb": 64, 00:15:49.030 "state": "online", 00:15:49.030 "raid_level": "raid5f", 00:15:49.030 "superblock": true, 00:15:49.030 "num_base_bdevs": 3, 00:15:49.030 "num_base_bdevs_discovered": 3, 00:15:49.030 "num_base_bdevs_operational": 3, 00:15:49.030 "process": { 00:15:49.030 "type": "rebuild", 00:15:49.030 "target": "spare", 00:15:49.030 "progress": { 00:15:49.030 "blocks": 114688, 00:15:49.030 "percent": 90 00:15:49.030 } 00:15:49.030 }, 00:15:49.030 "base_bdevs_list": [ 00:15:49.030 { 00:15:49.030 "name": "spare", 00:15:49.030 "uuid": 
"60f06098-83f0-5c2d-b41a-7b905353dfe7", 00:15:49.030 "is_configured": true, 00:15:49.030 "data_offset": 2048, 00:15:49.030 "data_size": 63488 00:15:49.030 }, 00:15:49.030 { 00:15:49.030 "name": "BaseBdev2", 00:15:49.030 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:49.030 "is_configured": true, 00:15:49.030 "data_offset": 2048, 00:15:49.030 "data_size": 63488 00:15:49.030 }, 00:15:49.030 { 00:15:49.030 "name": "BaseBdev3", 00:15:49.030 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:49.030 "is_configured": true, 00:15:49.030 "data_offset": 2048, 00:15:49.030 "data_size": 63488 00:15:49.030 } 00:15:49.030 ] 00:15:49.030 }' 00:15:49.031 05:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:49.031 05:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:49.031 05:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:49.031 05:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:49.031 05:53:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:49.290 [2024-12-12 05:53:56.682454] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:49.290 [2024-12-12 05:53:56.682565] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:49.290 [2024-12-12 05:53:56.682669] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.860 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:49.860 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:49.860 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:49.860 05:53:57 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:49.860 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:49.860 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:49.860 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:49.860 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.860 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.860 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.860 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.860 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:49.860 "name": "raid_bdev1", 00:15:49.860 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:49.860 "strip_size_kb": 64, 00:15:49.860 "state": "online", 00:15:49.860 "raid_level": "raid5f", 00:15:49.860 "superblock": true, 00:15:49.860 "num_base_bdevs": 3, 00:15:49.860 "num_base_bdevs_discovered": 3, 00:15:49.860 "num_base_bdevs_operational": 3, 00:15:49.860 "base_bdevs_list": [ 00:15:49.860 { 00:15:49.860 "name": "spare", 00:15:49.860 "uuid": "60f06098-83f0-5c2d-b41a-7b905353dfe7", 00:15:49.860 "is_configured": true, 00:15:49.860 "data_offset": 2048, 00:15:49.860 "data_size": 63488 00:15:49.860 }, 00:15:49.860 { 00:15:49.860 "name": "BaseBdev2", 00:15:49.860 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:49.860 "is_configured": true, 00:15:49.860 "data_offset": 2048, 00:15:49.860 "data_size": 63488 00:15:49.860 }, 00:15:49.860 { 00:15:49.860 "name": "BaseBdev3", 00:15:49.860 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:49.860 "is_configured": true, 00:15:49.860 "data_offset": 2048, 00:15:49.860 "data_size": 63488 00:15:49.860 } 
00:15:49.860 ] 00:15:49.860 }' 00:15:49.860 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:50.121 "name": "raid_bdev1", 00:15:50.121 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:50.121 "strip_size_kb": 64, 00:15:50.121 "state": "online", 00:15:50.121 "raid_level": 
"raid5f", 00:15:50.121 "superblock": true, 00:15:50.121 "num_base_bdevs": 3, 00:15:50.121 "num_base_bdevs_discovered": 3, 00:15:50.121 "num_base_bdevs_operational": 3, 00:15:50.121 "base_bdevs_list": [ 00:15:50.121 { 00:15:50.121 "name": "spare", 00:15:50.121 "uuid": "60f06098-83f0-5c2d-b41a-7b905353dfe7", 00:15:50.121 "is_configured": true, 00:15:50.121 "data_offset": 2048, 00:15:50.121 "data_size": 63488 00:15:50.121 }, 00:15:50.121 { 00:15:50.121 "name": "BaseBdev2", 00:15:50.121 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:50.121 "is_configured": true, 00:15:50.121 "data_offset": 2048, 00:15:50.121 "data_size": 63488 00:15:50.121 }, 00:15:50.121 { 00:15:50.121 "name": "BaseBdev3", 00:15:50.121 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:50.121 "is_configured": true, 00:15:50.121 "data_offset": 2048, 00:15:50.121 "data_size": 63488 00:15:50.121 } 00:15:50.121 ] 00:15:50.121 }' 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.121 05:53:57 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.121 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.381 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.381 "name": "raid_bdev1", 00:15:50.381 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:50.381 "strip_size_kb": 64, 00:15:50.381 "state": "online", 00:15:50.381 "raid_level": "raid5f", 00:15:50.381 "superblock": true, 00:15:50.381 "num_base_bdevs": 3, 00:15:50.381 "num_base_bdevs_discovered": 3, 00:15:50.381 "num_base_bdevs_operational": 3, 00:15:50.381 "base_bdevs_list": [ 00:15:50.381 { 00:15:50.381 "name": "spare", 00:15:50.381 "uuid": "60f06098-83f0-5c2d-b41a-7b905353dfe7", 00:15:50.381 "is_configured": true, 00:15:50.381 "data_offset": 2048, 00:15:50.381 "data_size": 63488 00:15:50.381 }, 00:15:50.381 { 00:15:50.381 "name": "BaseBdev2", 00:15:50.381 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:50.381 "is_configured": true, 00:15:50.381 "data_offset": 2048, 00:15:50.381 
"data_size": 63488 00:15:50.381 }, 00:15:50.381 { 00:15:50.381 "name": "BaseBdev3", 00:15:50.381 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:50.381 "is_configured": true, 00:15:50.381 "data_offset": 2048, 00:15:50.381 "data_size": 63488 00:15:50.381 } 00:15:50.381 ] 00:15:50.381 }' 00:15:50.381 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.381 05:53:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.641 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:50.641 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.641 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.642 [2024-12-12 05:53:58.050514] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:50.642 [2024-12-12 05:53:58.050547] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:50.642 [2024-12-12 05:53:58.050632] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.642 [2024-12-12 05:53:58.050718] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.642 [2024-12-12 05:53:58.050752] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:50.642 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.642 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.642 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:15:50.642 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.642 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:50.642 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.642 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:50.642 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:50.642 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:50.642 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:50.642 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:50.642 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:50.642 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:50.642 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:50.642 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:50.642 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:50.642 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:50.642 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:50.642 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:50.902 /dev/nbd0 00:15:50.902 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:50.902 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:50.902 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 
00:15:50.902 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:50.902 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:50.902 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:50.902 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:50.902 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:50.902 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:50.902 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:50.902 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:50.902 1+0 records in 00:15:50.902 1+0 records out 00:15:50.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000441456 s, 9.3 MB/s 00:15:50.902 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.902 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:50.902 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:50.902 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:50.902 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:50.902 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:50.902 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:50.902 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:51.162 /dev/nbd1 00:15:51.162 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:51.162 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:51.162 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:51.162 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:51.162 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:51.162 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:51.162 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:51.162 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:51.162 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:51.162 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:51.162 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:51.162 1+0 records in 00:15:51.162 1+0 records out 00:15:51.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379381 s, 10.8 MB/s 00:15:51.162 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.162 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:51.162 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:51.162 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- 
# '[' 4096 '!=' 0 ']' 00:15:51.162 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:51.162 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:51.162 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:51.162 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:51.422 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:51.422 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:51.422 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:51.422 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:51.422 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:51.422 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.422 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:51.422 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:51.422 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:51.422 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:51.422 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.422 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.422 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:51.422 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:15:51.422 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.422 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:51.422 05:53:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:51.682 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:51.682 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:51.682 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:51.682 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:51.682 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:51.682 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:51.682 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:51.682 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:51.682 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:51.682 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:51.682 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.682 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.682 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.682 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:51.682 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:51.682 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.682 [2024-12-12 05:53:59.172727] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:51.682 [2024-12-12 05:53:59.172805] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:51.682 [2024-12-12 05:53:59.172825] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:15:51.682 [2024-12-12 05:53:59.172836] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:51.682 [2024-12-12 05:53:59.175115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:51.682 [2024-12-12 05:53:59.175161] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:51.682 [2024-12-12 05:53:59.175254] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:51.682 [2024-12-12 05:53:59.175314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:51.682 [2024-12-12 05:53:59.175464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.682 [2024-12-12 05:53:59.175638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:51.682 spare 00:15:51.682 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.682 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:51.682 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.682 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.942 [2024-12-12 05:53:59.275538] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:51.942 [2024-12-12 05:53:59.275571] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:51.942 [2024-12-12 05:53:59.275822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:15:51.942 [2024-12-12 05:53:59.280952] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:51.942 [2024-12-12 05:53:59.280978] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:51.942 [2024-12-12 05:53:59.281186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:51.942 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.942 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:15:51.942 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:51.942 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:51.942 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.942 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.942 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.942 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.942 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.942 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.942 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.942 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.942 05:53:59 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.942 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.942 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:51.942 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.942 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.942 "name": "raid_bdev1", 00:15:51.942 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:51.942 "strip_size_kb": 64, 00:15:51.942 "state": "online", 00:15:51.942 "raid_level": "raid5f", 00:15:51.942 "superblock": true, 00:15:51.943 "num_base_bdevs": 3, 00:15:51.943 "num_base_bdevs_discovered": 3, 00:15:51.943 "num_base_bdevs_operational": 3, 00:15:51.943 "base_bdevs_list": [ 00:15:51.943 { 00:15:51.943 "name": "spare", 00:15:51.943 "uuid": "60f06098-83f0-5c2d-b41a-7b905353dfe7", 00:15:51.943 "is_configured": true, 00:15:51.943 "data_offset": 2048, 00:15:51.943 "data_size": 63488 00:15:51.943 }, 00:15:51.943 { 00:15:51.943 "name": "BaseBdev2", 00:15:51.943 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:51.943 "is_configured": true, 00:15:51.943 "data_offset": 2048, 00:15:51.943 "data_size": 63488 00:15:51.943 }, 00:15:51.943 { 00:15:51.943 "name": "BaseBdev3", 00:15:51.943 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:51.943 "is_configured": true, 00:15:51.943 "data_offset": 2048, 00:15:51.943 "data_size": 63488 00:15:51.943 } 00:15:51.943 ] 00:15:51.943 }' 00:15:51.943 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.943 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:52.512 05:53:59 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:52.512 "name": "raid_bdev1", 00:15:52.512 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:52.512 "strip_size_kb": 64, 00:15:52.512 "state": "online", 00:15:52.512 "raid_level": "raid5f", 00:15:52.512 "superblock": true, 00:15:52.512 "num_base_bdevs": 3, 00:15:52.512 "num_base_bdevs_discovered": 3, 00:15:52.512 "num_base_bdevs_operational": 3, 00:15:52.512 "base_bdevs_list": [ 00:15:52.512 { 00:15:52.512 "name": "spare", 00:15:52.512 "uuid": "60f06098-83f0-5c2d-b41a-7b905353dfe7", 00:15:52.512 "is_configured": true, 00:15:52.512 "data_offset": 2048, 00:15:52.512 "data_size": 63488 00:15:52.512 }, 00:15:52.512 { 00:15:52.512 "name": "BaseBdev2", 00:15:52.512 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:52.512 "is_configured": true, 00:15:52.512 "data_offset": 2048, 00:15:52.512 "data_size": 63488 00:15:52.512 }, 00:15:52.512 { 00:15:52.512 "name": "BaseBdev3", 00:15:52.512 "uuid": 
"97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:52.512 "is_configured": true, 00:15:52.512 "data_offset": 2048, 00:15:52.512 "data_size": 63488 00:15:52.512 } 00:15:52.512 ] 00:15:52.512 }' 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.512 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.513 [2024-12-12 05:53:59.922585] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:52.513 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.513 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:52.513 
05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:52.513 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.513 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.513 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.513 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:52.513 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.513 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.513 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.513 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.513 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.513 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:52.513 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.513 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.513 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.513 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.513 "name": "raid_bdev1", 00:15:52.513 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:52.513 "strip_size_kb": 64, 00:15:52.513 "state": "online", 00:15:52.513 "raid_level": "raid5f", 00:15:52.513 "superblock": true, 00:15:52.513 "num_base_bdevs": 3, 00:15:52.513 "num_base_bdevs_discovered": 2, 00:15:52.513 "num_base_bdevs_operational": 2, 
00:15:52.513 "base_bdevs_list": [ 00:15:52.513 { 00:15:52.513 "name": null, 00:15:52.513 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.513 "is_configured": false, 00:15:52.513 "data_offset": 0, 00:15:52.513 "data_size": 63488 00:15:52.513 }, 00:15:52.513 { 00:15:52.513 "name": "BaseBdev2", 00:15:52.513 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:52.513 "is_configured": true, 00:15:52.513 "data_offset": 2048, 00:15:52.513 "data_size": 63488 00:15:52.513 }, 00:15:52.513 { 00:15:52.513 "name": "BaseBdev3", 00:15:52.513 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:52.513 "is_configured": true, 00:15:52.513 "data_offset": 2048, 00:15:52.513 "data_size": 63488 00:15:52.513 } 00:15:52.513 ] 00:15:52.513 }' 00:15:52.513 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.513 05:53:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.086 05:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:53.086 05:54:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.086 05:54:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.086 [2024-12-12 05:54:00.413816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:53.086 [2024-12-12 05:54:00.414022] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:53.086 [2024-12-12 05:54:00.414043] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:53.086 [2024-12-12 05:54:00.414082] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:53.086 [2024-12-12 05:54:00.429605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:15:53.086 05:54:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.086 05:54:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:53.086 [2024-12-12 05:54:00.436723] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:54.024 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:54.024 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:54.024 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:54.024 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:54.024 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:54.024 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.024 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.024 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.024 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.024 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.024 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:54.024 "name": "raid_bdev1", 00:15:54.024 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:54.024 "strip_size_kb": 64, 00:15:54.024 "state": "online", 00:15:54.024 
"raid_level": "raid5f", 00:15:54.024 "superblock": true, 00:15:54.024 "num_base_bdevs": 3, 00:15:54.024 "num_base_bdevs_discovered": 3, 00:15:54.024 "num_base_bdevs_operational": 3, 00:15:54.024 "process": { 00:15:54.024 "type": "rebuild", 00:15:54.024 "target": "spare", 00:15:54.024 "progress": { 00:15:54.024 "blocks": 20480, 00:15:54.024 "percent": 16 00:15:54.024 } 00:15:54.024 }, 00:15:54.024 "base_bdevs_list": [ 00:15:54.024 { 00:15:54.024 "name": "spare", 00:15:54.024 "uuid": "60f06098-83f0-5c2d-b41a-7b905353dfe7", 00:15:54.024 "is_configured": true, 00:15:54.024 "data_offset": 2048, 00:15:54.024 "data_size": 63488 00:15:54.024 }, 00:15:54.024 { 00:15:54.024 "name": "BaseBdev2", 00:15:54.024 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:54.024 "is_configured": true, 00:15:54.024 "data_offset": 2048, 00:15:54.024 "data_size": 63488 00:15:54.024 }, 00:15:54.024 { 00:15:54.025 "name": "BaseBdev3", 00:15:54.025 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:54.025 "is_configured": true, 00:15:54.025 "data_offset": 2048, 00:15:54.025 "data_size": 63488 00:15:54.025 } 00:15:54.025 ] 00:15:54.025 }' 00:15:54.025 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:54.025 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:54.025 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:54.284 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:54.284 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:54.284 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.284 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.284 [2024-12-12 05:54:01.579830] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:54.284 [2024-12-12 05:54:01.644667] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:54.284 [2024-12-12 05:54:01.644728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.284 [2024-12-12 05:54:01.644759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:54.284 [2024-12-12 05:54:01.644768] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:54.284 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.284 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:54.284 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:54.284 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:54.284 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.284 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.284 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:54.284 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.284 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.284 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.284 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.284 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:54.284 05:54:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.284 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.284 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.284 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.284 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.284 "name": "raid_bdev1", 00:15:54.284 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:54.284 "strip_size_kb": 64, 00:15:54.284 "state": "online", 00:15:54.284 "raid_level": "raid5f", 00:15:54.284 "superblock": true, 00:15:54.284 "num_base_bdevs": 3, 00:15:54.284 "num_base_bdevs_discovered": 2, 00:15:54.284 "num_base_bdevs_operational": 2, 00:15:54.284 "base_bdevs_list": [ 00:15:54.284 { 00:15:54.284 "name": null, 00:15:54.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.284 "is_configured": false, 00:15:54.284 "data_offset": 0, 00:15:54.284 "data_size": 63488 00:15:54.284 }, 00:15:54.284 { 00:15:54.284 "name": "BaseBdev2", 00:15:54.284 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:54.284 "is_configured": true, 00:15:54.284 "data_offset": 2048, 00:15:54.284 "data_size": 63488 00:15:54.284 }, 00:15:54.284 { 00:15:54.284 "name": "BaseBdev3", 00:15:54.284 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:54.284 "is_configured": true, 00:15:54.284 "data_offset": 2048, 00:15:54.284 "data_size": 63488 00:15:54.285 } 00:15:54.285 ] 00:15:54.285 }' 00:15:54.285 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.285 05:54:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.854 05:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:54.854 05:54:02 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.854 05:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.854 [2024-12-12 05:54:02.105086] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:54.854 [2024-12-12 05:54:02.105175] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:54.854 [2024-12-12 05:54:02.105197] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:15:54.854 [2024-12-12 05:54:02.105210] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:54.854 [2024-12-12 05:54:02.105721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:54.854 [2024-12-12 05:54:02.105756] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:54.854 [2024-12-12 05:54:02.105859] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:54.854 [2024-12-12 05:54:02.105895] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:54.854 [2024-12-12 05:54:02.105905] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:54.854 [2024-12-12 05:54:02.105934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:54.854 [2024-12-12 05:54:02.121126] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:15:54.854 spare 00:15:54.854 05:54:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.854 05:54:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:54.854 [2024-12-12 05:54:02.128462] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:55.794 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:55.794 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:55.794 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:55.794 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:55.794 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:55.794 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.794 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.794 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:55.794 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.794 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.794 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:55.794 "name": "raid_bdev1", 00:15:55.794 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:55.794 "strip_size_kb": 64, 00:15:55.794 "state": 
"online", 00:15:55.794 "raid_level": "raid5f", 00:15:55.794 "superblock": true, 00:15:55.794 "num_base_bdevs": 3, 00:15:55.794 "num_base_bdevs_discovered": 3, 00:15:55.794 "num_base_bdevs_operational": 3, 00:15:55.794 "process": { 00:15:55.794 "type": "rebuild", 00:15:55.794 "target": "spare", 00:15:55.794 "progress": { 00:15:55.794 "blocks": 20480, 00:15:55.794 "percent": 16 00:15:55.794 } 00:15:55.794 }, 00:15:55.794 "base_bdevs_list": [ 00:15:55.794 { 00:15:55.794 "name": "spare", 00:15:55.794 "uuid": "60f06098-83f0-5c2d-b41a-7b905353dfe7", 00:15:55.794 "is_configured": true, 00:15:55.794 "data_offset": 2048, 00:15:55.794 "data_size": 63488 00:15:55.794 }, 00:15:55.794 { 00:15:55.794 "name": "BaseBdev2", 00:15:55.794 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:55.794 "is_configured": true, 00:15:55.794 "data_offset": 2048, 00:15:55.794 "data_size": 63488 00:15:55.794 }, 00:15:55.794 { 00:15:55.794 "name": "BaseBdev3", 00:15:55.794 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:55.794 "is_configured": true, 00:15:55.794 "data_offset": 2048, 00:15:55.794 "data_size": 63488 00:15:55.794 } 00:15:55.794 ] 00:15:55.794 }' 00:15:55.794 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:55.794 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:55.794 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:55.794 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:55.794 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:55.794 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.794 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.794 [2024-12-12 05:54:03.279396] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:56.054 [2024-12-12 05:54:03.336281] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:56.054 [2024-12-12 05:54:03.336338] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:56.054 [2024-12-12 05:54:03.336371] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:56.054 [2024-12-12 05:54:03.336378] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:56.054 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.054 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:56.054 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:56.054 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:56.054 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.054 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.054 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:56.054 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.054 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.054 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.054 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.054 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.054 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.054 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.054 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.054 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.054 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.054 "name": "raid_bdev1", 00:15:56.054 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:56.054 "strip_size_kb": 64, 00:15:56.054 "state": "online", 00:15:56.054 "raid_level": "raid5f", 00:15:56.054 "superblock": true, 00:15:56.054 "num_base_bdevs": 3, 00:15:56.054 "num_base_bdevs_discovered": 2, 00:15:56.054 "num_base_bdevs_operational": 2, 00:15:56.054 "base_bdevs_list": [ 00:15:56.054 { 00:15:56.054 "name": null, 00:15:56.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.054 "is_configured": false, 00:15:56.054 "data_offset": 0, 00:15:56.054 "data_size": 63488 00:15:56.054 }, 00:15:56.054 { 00:15:56.054 "name": "BaseBdev2", 00:15:56.054 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:56.054 "is_configured": true, 00:15:56.054 "data_offset": 2048, 00:15:56.054 "data_size": 63488 00:15:56.054 }, 00:15:56.054 { 00:15:56.054 "name": "BaseBdev3", 00:15:56.054 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:56.054 "is_configured": true, 00:15:56.054 "data_offset": 2048, 00:15:56.054 "data_size": 63488 00:15:56.054 } 00:15:56.054 ] 00:15:56.054 }' 00:15:56.054 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.054 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.314 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:56.314 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:15:56.314 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:56.314 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:56.314 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:56.314 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.314 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:56.314 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.314 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.574 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.574 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:56.574 "name": "raid_bdev1", 00:15:56.574 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:56.574 "strip_size_kb": 64, 00:15:56.574 "state": "online", 00:15:56.574 "raid_level": "raid5f", 00:15:56.574 "superblock": true, 00:15:56.574 "num_base_bdevs": 3, 00:15:56.574 "num_base_bdevs_discovered": 2, 00:15:56.574 "num_base_bdevs_operational": 2, 00:15:56.574 "base_bdevs_list": [ 00:15:56.574 { 00:15:56.574 "name": null, 00:15:56.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:56.574 "is_configured": false, 00:15:56.574 "data_offset": 0, 00:15:56.574 "data_size": 63488 00:15:56.574 }, 00:15:56.574 { 00:15:56.574 "name": "BaseBdev2", 00:15:56.574 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:56.574 "is_configured": true, 00:15:56.574 "data_offset": 2048, 00:15:56.574 "data_size": 63488 00:15:56.574 }, 00:15:56.574 { 00:15:56.574 "name": "BaseBdev3", 00:15:56.574 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:56.574 "is_configured": true, 
00:15:56.574 "data_offset": 2048, 00:15:56.574 "data_size": 63488 00:15:56.574 } 00:15:56.574 ] 00:15:56.574 }' 00:15:56.574 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:56.574 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:56.574 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:56.574 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:56.574 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:56.574 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.574 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.574 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.574 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:56.574 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.574 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.574 [2024-12-12 05:54:03.980832] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:56.574 [2024-12-12 05:54:03.980890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:56.574 [2024-12-12 05:54:03.980916] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:56.574 [2024-12-12 05:54:03.980925] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:56.574 [2024-12-12 05:54:03.981360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:56.574 [2024-12-12 
05:54:03.981376] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:56.574 [2024-12-12 05:54:03.981454] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:56.574 [2024-12-12 05:54:03.981470] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:56.574 [2024-12-12 05:54:03.981488] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:56.574 [2024-12-12 05:54:03.981511] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:56.574 BaseBdev1 00:15:56.574 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.574 05:54:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:57.511 05:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:57.511 05:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:57.511 05:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:57.511 05:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.511 05:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.511 05:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:57.511 05:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.511 05:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.511 05:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.511 05:54:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.512 05:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.512 05:54:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:57.512 05:54:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.512 05:54:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.512 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.771 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.771 "name": "raid_bdev1", 00:15:57.771 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:57.771 "strip_size_kb": 64, 00:15:57.771 "state": "online", 00:15:57.771 "raid_level": "raid5f", 00:15:57.771 "superblock": true, 00:15:57.771 "num_base_bdevs": 3, 00:15:57.771 "num_base_bdevs_discovered": 2, 00:15:57.771 "num_base_bdevs_operational": 2, 00:15:57.771 "base_bdevs_list": [ 00:15:57.771 { 00:15:57.771 "name": null, 00:15:57.771 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:57.771 "is_configured": false, 00:15:57.771 "data_offset": 0, 00:15:57.771 "data_size": 63488 00:15:57.771 }, 00:15:57.771 { 00:15:57.771 "name": "BaseBdev2", 00:15:57.771 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:57.771 "is_configured": true, 00:15:57.771 "data_offset": 2048, 00:15:57.771 "data_size": 63488 00:15:57.771 }, 00:15:57.771 { 00:15:57.771 "name": "BaseBdev3", 00:15:57.771 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:57.771 "is_configured": true, 00:15:57.772 "data_offset": 2048, 00:15:57.772 "data_size": 63488 00:15:57.772 } 00:15:57.772 ] 00:15:57.772 }' 00:15:57.772 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.772 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:58.031 "name": "raid_bdev1", 00:15:58.031 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:58.031 "strip_size_kb": 64, 00:15:58.031 "state": "online", 00:15:58.031 "raid_level": "raid5f", 00:15:58.031 "superblock": true, 00:15:58.031 "num_base_bdevs": 3, 00:15:58.031 "num_base_bdevs_discovered": 2, 00:15:58.031 "num_base_bdevs_operational": 2, 00:15:58.031 "base_bdevs_list": [ 00:15:58.031 { 00:15:58.031 "name": null, 00:15:58.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:58.031 "is_configured": false, 00:15:58.031 "data_offset": 0, 00:15:58.031 "data_size": 63488 00:15:58.031 }, 00:15:58.031 { 00:15:58.031 "name": "BaseBdev2", 00:15:58.031 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 
00:15:58.031 "is_configured": true, 00:15:58.031 "data_offset": 2048, 00:15:58.031 "data_size": 63488 00:15:58.031 }, 00:15:58.031 { 00:15:58.031 "name": "BaseBdev3", 00:15:58.031 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:58.031 "is_configured": true, 00:15:58.031 "data_offset": 2048, 00:15:58.031 "data_size": 63488 00:15:58.031 } 00:15:58.031 ] 00:15:58.031 }' 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:58.031 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.031 05:54:05 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.031 [2024-12-12 05:54:05.546352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:58.031 [2024-12-12 05:54:05.546587] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:58.031 [2024-12-12 05:54:05.546620] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:58.031 request: 00:15:58.031 { 00:15:58.031 "base_bdev": "BaseBdev1", 00:15:58.031 "raid_bdev": "raid_bdev1", 00:15:58.031 "method": "bdev_raid_add_base_bdev", 00:15:58.031 "req_id": 1 00:15:58.031 } 00:15:58.031 Got JSON-RPC error response 00:15:58.291 response: 00:15:58.291 { 00:15:58.291 "code": -22, 00:15:58.291 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:58.291 } 00:15:58.291 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:58.291 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:15:58.291 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:58.291 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:58.291 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:58.291 05:54:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:59.231 05:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:15:59.231 05:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.231 05:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.231 05:54:06 bdev_raid.raid5f_rebuild_test_sb -- 
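The failed `bdev_raid_add_base_bdev` call above (JSON-RPC error -22) is the expected outcome: the harness wraps the RPC in a `NOT` helper that inverts the exit status, so the test step passes only when the RPC fails. A minimal stand-alone sketch of that inversion pattern (simplified from the real `NOT`/`valid_exec_arg` machinery in autotest_common.sh; `false` stands in for the failing rpc_cmd):

```shell
# Simplified sketch of the NOT helper: succeed iff the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    # Invert the status: exit 0 when the command failed, nonzero otherwise.
    [ "$es" -ne 0 ]
}

# A failing command makes the NOT-wrapped test step pass.
NOT false && add_bdev_result=expected_failure || add_bdev_result=unexpected
echo "$add_bdev_result"
```

The real helper also distinguishes exit codes above 128 (signal deaths), which this sketch omits.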
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:59.231 05:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:59.231 05:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:59.231 05:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.231 05:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.231 05:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.231 05:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.231 05:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.231 05:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.231 05:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.231 05:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.231 05:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.231 05:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.231 "name": "raid_bdev1", 00:15:59.231 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:59.231 "strip_size_kb": 64, 00:15:59.231 "state": "online", 00:15:59.231 "raid_level": "raid5f", 00:15:59.231 "superblock": true, 00:15:59.231 "num_base_bdevs": 3, 00:15:59.231 "num_base_bdevs_discovered": 2, 00:15:59.231 "num_base_bdevs_operational": 2, 00:15:59.231 "base_bdevs_list": [ 00:15:59.231 { 00:15:59.231 "name": null, 00:15:59.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.231 "is_configured": false, 00:15:59.231 "data_offset": 0, 00:15:59.231 "data_size": 63488 00:15:59.231 }, 00:15:59.231 { 00:15:59.231 
"name": "BaseBdev2", 00:15:59.231 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:59.231 "is_configured": true, 00:15:59.231 "data_offset": 2048, 00:15:59.231 "data_size": 63488 00:15:59.231 }, 00:15:59.231 { 00:15:59.231 "name": "BaseBdev3", 00:15:59.231 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:59.231 "is_configured": true, 00:15:59.231 "data_offset": 2048, 00:15:59.231 "data_size": 63488 00:15:59.231 } 00:15:59.231 ] 00:15:59.231 }' 00:15:59.231 05:54:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.231 05:54:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.801 "name": "raid_bdev1", 00:15:59.801 "uuid": "b184d712-8464-4a01-8314-a05ee2499793", 00:15:59.801 
"strip_size_kb": 64, 00:15:59.801 "state": "online", 00:15:59.801 "raid_level": "raid5f", 00:15:59.801 "superblock": true, 00:15:59.801 "num_base_bdevs": 3, 00:15:59.801 "num_base_bdevs_discovered": 2, 00:15:59.801 "num_base_bdevs_operational": 2, 00:15:59.801 "base_bdevs_list": [ 00:15:59.801 { 00:15:59.801 "name": null, 00:15:59.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.801 "is_configured": false, 00:15:59.801 "data_offset": 0, 00:15:59.801 "data_size": 63488 00:15:59.801 }, 00:15:59.801 { 00:15:59.801 "name": "BaseBdev2", 00:15:59.801 "uuid": "2e998844-1bbb-5d3f-95b3-aab2c1d88d37", 00:15:59.801 "is_configured": true, 00:15:59.801 "data_offset": 2048, 00:15:59.801 "data_size": 63488 00:15:59.801 }, 00:15:59.801 { 00:15:59.801 "name": "BaseBdev3", 00:15:59.801 "uuid": "97250098-5068-5f6e-83b0-fa8bb3cd91fc", 00:15:59.801 "is_configured": true, 00:15:59.801 "data_offset": 2048, 00:15:59.801 "data_size": 63488 00:15:59.801 } 00:15:59.801 ] 00:15:59.801 }' 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82144 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82144 ']' 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82144 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:59.801 05:54:07 
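The `verify_raid_bdev_process` checks above assert that no background process (such as a rebuild) is attached to the raid bdev, by filtering the RPC output with jq and defaulting a missing `.process` object to "none". A minimal sketch of the same check, with a hypothetical inline JSON document standing in for the `bdev_raid_get_bdevs` output (requires jq):

```shell
# Hypothetical RPC output for raid_bdev1; in the real test this comes from
# rpc_cmd bdev_raid_get_bdevs all | jq '.[] | select(.name == "raid_bdev1")'.
raid_bdev_info='{"name": "raid_bdev1", "state": "online", "raid_level": "raid5f"}'

# jq's // operator substitutes "none" when .process (or its fields) is absent,
# which is how the harness asserts that no rebuild is currently running.
process_type=$(echo "$raid_bdev_info" | jq -r '.process.type // "none"')
process_target=$(echo "$raid_bdev_info" | jq -r '.process.target // "none"')

echo "type=$process_type target=$process_target"
```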
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82144 00:15:59.801 killing process with pid 82144 00:15:59.801 Received shutdown signal, test time was about 60.000000 seconds 00:15:59.801 00:15:59.801 Latency(us) 00:15:59.801 [2024-12-12T05:54:07.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.801 [2024-12-12T05:54:07.323Z] =================================================================================================================== 00:15:59.801 [2024-12-12T05:54:07.323Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82144' 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82144 00:15:59.801 [2024-12-12 05:54:07.178356] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:59.801 [2024-12-12 05:54:07.178485] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:59.801 [2024-12-12 05:54:07.178559] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:59.801 05:54:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82144 00:15:59.801 [2024-12-12 05:54:07.178573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:00.061 [2024-12-12 05:54:07.553453] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:01.442 05:54:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:01.442 00:16:01.442 real 0m22.958s 00:16:01.442 user 0m29.403s 
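The `killprocess 82144` sequence above first verifies that the pid still maps to the expected process name (via `ps --no-headers -o comm=`) before signalling it, so a recycled pid is never killed by mistake. A simplified sketch of that pattern (assumes a Linux-style `ps`; the real helper also special-cases `sudo`-spawned processes):

```shell
# Simplified killprocess: check liveness and command name, then SIGTERM.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1           # still alive?
    local name
    name=$(ps --no-headers -o comm= -p "$pid")       # command name for the pid
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                  # reap; ignore kill status
}

sleep 60 &                     # stand-in for the spdk target process
killprocess_sketch $!
```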
00:16:01.442 sys 0m2.677s 00:16:01.442 05:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:01.442 05:54:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.442 ************************************ 00:16:01.442 END TEST raid5f_rebuild_test_sb 00:16:01.442 ************************************ 00:16:01.442 05:54:08 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:01.442 05:54:08 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:01.442 05:54:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:01.442 05:54:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:01.442 05:54:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:01.442 ************************************ 00:16:01.442 START TEST raid5f_state_function_test 00:16:01.442 ************************************ 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:01.442 Process raid pid: 82759 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=82759 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 82759' 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 82759 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 82759 ']' 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.442 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.442 05:54:08 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.442 [2024-12-12 05:54:08.747799] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:16:01.442 [2024-12-12 05:54:08.747936] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.442 [2024-12-12 05:54:08.920170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.703 [2024-12-12 05:54:09.020364] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.703 [2024-12-12 05:54:09.212696] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.703 [2024-12-12 05:54:09.212730] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:02.273 05:54:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:02.273 05:54:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:02.273 05:54:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:02.273 05:54:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.273 05:54:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.273 [2024-12-12 05:54:09.562903] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:02.273 [2024-12-12 05:54:09.562955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:02.273 [2024-12-12 05:54:09.562970] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.273 [2024-12-12 05:54:09.562979] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.273 [2024-12-12 05:54:09.562985] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:02.273 [2024-12-12 05:54:09.562994] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:02.273 [2024-12-12 05:54:09.563000] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:02.273 [2024-12-12 05:54:09.563008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:02.273 05:54:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.273 05:54:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:02.273 05:54:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.273 05:54:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.273 05:54:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.273 05:54:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.273 05:54:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.273 05:54:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.273 05:54:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.273 05:54:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.273 05:54:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.273 05:54:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.273 05:54:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.273 05:54:09 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.273 05:54:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.273 05:54:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.273 05:54:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.273 "name": "Existed_Raid", 00:16:02.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.273 "strip_size_kb": 64, 00:16:02.273 "state": "configuring", 00:16:02.273 "raid_level": "raid5f", 00:16:02.273 "superblock": false, 00:16:02.273 "num_base_bdevs": 4, 00:16:02.273 "num_base_bdevs_discovered": 0, 00:16:02.273 "num_base_bdevs_operational": 4, 00:16:02.273 "base_bdevs_list": [ 00:16:02.273 { 00:16:02.273 "name": "BaseBdev1", 00:16:02.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.273 "is_configured": false, 00:16:02.273 "data_offset": 0, 00:16:02.273 "data_size": 0 00:16:02.273 }, 00:16:02.273 { 00:16:02.273 "name": "BaseBdev2", 00:16:02.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.273 "is_configured": false, 00:16:02.273 "data_offset": 0, 00:16:02.273 "data_size": 0 00:16:02.273 }, 00:16:02.273 { 00:16:02.273 "name": "BaseBdev3", 00:16:02.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.273 "is_configured": false, 00:16:02.273 "data_offset": 0, 00:16:02.273 "data_size": 0 00:16:02.273 }, 00:16:02.273 { 00:16:02.274 "name": "BaseBdev4", 00:16:02.274 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.274 "is_configured": false, 00:16:02.274 "data_offset": 0, 00:16:02.274 "data_size": 0 00:16:02.274 } 00:16:02.274 ] 00:16:02.274 }' 00:16:02.274 05:54:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.274 05:54:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.533 05:54:09 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:02.533 05:54:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.534 05:54:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.534 [2024-12-12 05:54:09.994177] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:02.534 [2024-12-12 05:54:09.994262] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:02.534 05:54:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.534 05:54:09 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:02.534 05:54:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.534 05:54:09 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.534 [2024-12-12 05:54:10.006170] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:02.534 [2024-12-12 05:54:10.006252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:02.534 [2024-12-12 05:54:10.006294] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.534 [2024-12-12 05:54:10.006331] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.534 [2024-12-12 05:54:10.006360] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:02.534 [2024-12-12 05:54:10.006383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:02.534 [2024-12-12 05:54:10.006421] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:02.534 [2024-12-12 05:54:10.006443] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:02.534 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.534 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:02.534 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.534 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.534 [2024-12-12 05:54:10.052001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:02.534 BaseBdev1 00:16:02.534 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.534 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:02.534 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.794 
05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.794 [ 00:16:02.794 { 00:16:02.794 "name": "BaseBdev1", 00:16:02.794 "aliases": [ 00:16:02.794 "7762a92a-3b10-4a1e-8004-5333cf1f0a72" 00:16:02.794 ], 00:16:02.794 "product_name": "Malloc disk", 00:16:02.794 "block_size": 512, 00:16:02.794 "num_blocks": 65536, 00:16:02.794 "uuid": "7762a92a-3b10-4a1e-8004-5333cf1f0a72", 00:16:02.794 "assigned_rate_limits": { 00:16:02.794 "rw_ios_per_sec": 0, 00:16:02.794 "rw_mbytes_per_sec": 0, 00:16:02.794 "r_mbytes_per_sec": 0, 00:16:02.794 "w_mbytes_per_sec": 0 00:16:02.794 }, 00:16:02.794 "claimed": true, 00:16:02.794 "claim_type": "exclusive_write", 00:16:02.794 "zoned": false, 00:16:02.794 "supported_io_types": { 00:16:02.794 "read": true, 00:16:02.794 "write": true, 00:16:02.794 "unmap": true, 00:16:02.794 "flush": true, 00:16:02.794 "reset": true, 00:16:02.794 "nvme_admin": false, 00:16:02.794 "nvme_io": false, 00:16:02.794 "nvme_io_md": false, 00:16:02.794 "write_zeroes": true, 00:16:02.794 "zcopy": true, 00:16:02.794 "get_zone_info": false, 00:16:02.794 "zone_management": false, 00:16:02.794 "zone_append": false, 00:16:02.794 "compare": false, 00:16:02.794 "compare_and_write": false, 00:16:02.794 "abort": true, 00:16:02.794 "seek_hole": false, 00:16:02.794 "seek_data": false, 00:16:02.794 "copy": true, 00:16:02.794 "nvme_iov_md": false 00:16:02.794 }, 00:16:02.794 "memory_domains": [ 00:16:02.794 { 00:16:02.794 "dma_device_id": "system", 00:16:02.794 "dma_device_type": 1 00:16:02.794 }, 00:16:02.794 { 00:16:02.794 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.794 "dma_device_type": 2 00:16:02.794 } 00:16:02.794 ], 00:16:02.794 "driver_specific": {} 00:16:02.794 } 
00:16:02.794 ] 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:02.794 "name": "Existed_Raid", 00:16:02.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.794 "strip_size_kb": 64, 00:16:02.794 "state": "configuring", 00:16:02.794 "raid_level": "raid5f", 00:16:02.794 "superblock": false, 00:16:02.794 "num_base_bdevs": 4, 00:16:02.794 "num_base_bdevs_discovered": 1, 00:16:02.794 "num_base_bdevs_operational": 4, 00:16:02.794 "base_bdevs_list": [ 00:16:02.794 { 00:16:02.794 "name": "BaseBdev1", 00:16:02.794 "uuid": "7762a92a-3b10-4a1e-8004-5333cf1f0a72", 00:16:02.794 "is_configured": true, 00:16:02.794 "data_offset": 0, 00:16:02.794 "data_size": 65536 00:16:02.794 }, 00:16:02.794 { 00:16:02.794 "name": "BaseBdev2", 00:16:02.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.794 "is_configured": false, 00:16:02.794 "data_offset": 0, 00:16:02.794 "data_size": 0 00:16:02.794 }, 00:16:02.794 { 00:16:02.794 "name": "BaseBdev3", 00:16:02.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.794 "is_configured": false, 00:16:02.794 "data_offset": 0, 00:16:02.794 "data_size": 0 00:16:02.794 }, 00:16:02.794 { 00:16:02.794 "name": "BaseBdev4", 00:16:02.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.794 "is_configured": false, 00:16:02.794 "data_offset": 0, 00:16:02.794 "data_size": 0 00:16:02.794 } 00:16:02.794 ] 00:16:02.794 }' 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:02.794 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.054 
[2024-12-12 05:54:10.531243] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:03.054 [2024-12-12 05:54:10.531284] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.054 [2024-12-12 05:54:10.539287] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:03.054 [2024-12-12 05:54:10.540966] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:03.054 [2024-12-12 05:54:10.541004] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:03.054 [2024-12-12 05:54:10.541012] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:03.054 [2024-12-12 05:54:10.541038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:03.054 [2024-12-12 05:54:10.541044] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:03.054 [2024-12-12 05:54:10.541052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.054 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.314 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.314 "name": "Existed_Raid", 00:16:03.314 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:03.314 "strip_size_kb": 64, 00:16:03.314 "state": "configuring", 00:16:03.314 "raid_level": "raid5f", 00:16:03.314 "superblock": false, 00:16:03.314 "num_base_bdevs": 4, 00:16:03.314 "num_base_bdevs_discovered": 1, 00:16:03.314 "num_base_bdevs_operational": 4, 00:16:03.314 "base_bdevs_list": [ 00:16:03.314 { 00:16:03.314 "name": "BaseBdev1", 00:16:03.314 "uuid": "7762a92a-3b10-4a1e-8004-5333cf1f0a72", 00:16:03.314 "is_configured": true, 00:16:03.314 "data_offset": 0, 00:16:03.314 "data_size": 65536 00:16:03.314 }, 00:16:03.315 { 00:16:03.315 "name": "BaseBdev2", 00:16:03.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.315 "is_configured": false, 00:16:03.315 "data_offset": 0, 00:16:03.315 "data_size": 0 00:16:03.315 }, 00:16:03.315 { 00:16:03.315 "name": "BaseBdev3", 00:16:03.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.315 "is_configured": false, 00:16:03.315 "data_offset": 0, 00:16:03.315 "data_size": 0 00:16:03.315 }, 00:16:03.315 { 00:16:03.315 "name": "BaseBdev4", 00:16:03.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.315 "is_configured": false, 00:16:03.315 "data_offset": 0, 00:16:03.315 "data_size": 0 00:16:03.315 } 00:16:03.315 ] 00:16:03.315 }' 00:16:03.315 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.315 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.574 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:03.574 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.574 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.574 [2024-12-12 05:54:10.979625] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.574 BaseBdev2 00:16:03.574 05:54:10 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.574 05:54:10 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:03.574 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:03.574 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:03.574 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:03.574 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:03.574 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:03.574 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:03.574 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.574 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.574 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.574 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:03.574 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.574 05:54:10 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.574 [ 00:16:03.574 { 00:16:03.574 "name": "BaseBdev2", 00:16:03.574 "aliases": [ 00:16:03.574 "e574c13f-e5b3-4414-84b0-ade2d449a090" 00:16:03.574 ], 00:16:03.574 "product_name": "Malloc disk", 00:16:03.574 "block_size": 512, 00:16:03.574 "num_blocks": 65536, 00:16:03.574 "uuid": "e574c13f-e5b3-4414-84b0-ade2d449a090", 00:16:03.574 "assigned_rate_limits": { 00:16:03.574 "rw_ios_per_sec": 0, 00:16:03.574 "rw_mbytes_per_sec": 0, 00:16:03.574 
"r_mbytes_per_sec": 0, 00:16:03.574 "w_mbytes_per_sec": 0 00:16:03.574 }, 00:16:03.574 "claimed": true, 00:16:03.574 "claim_type": "exclusive_write", 00:16:03.575 "zoned": false, 00:16:03.575 "supported_io_types": { 00:16:03.575 "read": true, 00:16:03.575 "write": true, 00:16:03.575 "unmap": true, 00:16:03.575 "flush": true, 00:16:03.575 "reset": true, 00:16:03.575 "nvme_admin": false, 00:16:03.575 "nvme_io": false, 00:16:03.575 "nvme_io_md": false, 00:16:03.575 "write_zeroes": true, 00:16:03.575 "zcopy": true, 00:16:03.575 "get_zone_info": false, 00:16:03.575 "zone_management": false, 00:16:03.575 "zone_append": false, 00:16:03.575 "compare": false, 00:16:03.575 "compare_and_write": false, 00:16:03.575 "abort": true, 00:16:03.575 "seek_hole": false, 00:16:03.575 "seek_data": false, 00:16:03.575 "copy": true, 00:16:03.575 "nvme_iov_md": false 00:16:03.575 }, 00:16:03.575 "memory_domains": [ 00:16:03.575 { 00:16:03.575 "dma_device_id": "system", 00:16:03.575 "dma_device_type": 1 00:16:03.575 }, 00:16:03.575 { 00:16:03.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.575 "dma_device_type": 2 00:16:03.575 } 00:16:03.575 ], 00:16:03.575 "driver_specific": {} 00:16:03.575 } 00:16:03.575 ] 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.575 "name": "Existed_Raid", 00:16:03.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.575 "strip_size_kb": 64, 00:16:03.575 "state": "configuring", 00:16:03.575 "raid_level": "raid5f", 00:16:03.575 "superblock": false, 00:16:03.575 "num_base_bdevs": 4, 00:16:03.575 "num_base_bdevs_discovered": 2, 00:16:03.575 "num_base_bdevs_operational": 4, 00:16:03.575 "base_bdevs_list": [ 00:16:03.575 { 00:16:03.575 "name": "BaseBdev1", 00:16:03.575 "uuid": 
"7762a92a-3b10-4a1e-8004-5333cf1f0a72", 00:16:03.575 "is_configured": true, 00:16:03.575 "data_offset": 0, 00:16:03.575 "data_size": 65536 00:16:03.575 }, 00:16:03.575 { 00:16:03.575 "name": "BaseBdev2", 00:16:03.575 "uuid": "e574c13f-e5b3-4414-84b0-ade2d449a090", 00:16:03.575 "is_configured": true, 00:16:03.575 "data_offset": 0, 00:16:03.575 "data_size": 65536 00:16:03.575 }, 00:16:03.575 { 00:16:03.575 "name": "BaseBdev3", 00:16:03.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.575 "is_configured": false, 00:16:03.575 "data_offset": 0, 00:16:03.575 "data_size": 0 00:16:03.575 }, 00:16:03.575 { 00:16:03.575 "name": "BaseBdev4", 00:16:03.575 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.575 "is_configured": false, 00:16:03.575 "data_offset": 0, 00:16:03.575 "data_size": 0 00:16:03.575 } 00:16:03.575 ] 00:16:03.575 }' 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.575 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.147 [2024-12-12 05:54:11.526204] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:04.147 BaseBdev3 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- 
# local bdev_timeout= 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.147 [ 00:16:04.147 { 00:16:04.147 "name": "BaseBdev3", 00:16:04.147 "aliases": [ 00:16:04.147 "ebfb0fa8-f197-49dc-b61e-f0654d516b7f" 00:16:04.147 ], 00:16:04.147 "product_name": "Malloc disk", 00:16:04.147 "block_size": 512, 00:16:04.147 "num_blocks": 65536, 00:16:04.147 "uuid": "ebfb0fa8-f197-49dc-b61e-f0654d516b7f", 00:16:04.147 "assigned_rate_limits": { 00:16:04.147 "rw_ios_per_sec": 0, 00:16:04.147 "rw_mbytes_per_sec": 0, 00:16:04.147 "r_mbytes_per_sec": 0, 00:16:04.147 "w_mbytes_per_sec": 0 00:16:04.147 }, 00:16:04.147 "claimed": true, 00:16:04.147 "claim_type": "exclusive_write", 00:16:04.147 "zoned": false, 00:16:04.147 "supported_io_types": { 00:16:04.147 "read": true, 00:16:04.147 "write": true, 00:16:04.147 "unmap": true, 00:16:04.147 "flush": true, 00:16:04.147 "reset": true, 00:16:04.147 "nvme_admin": false, 
00:16:04.147 "nvme_io": false, 00:16:04.147 "nvme_io_md": false, 00:16:04.147 "write_zeroes": true, 00:16:04.147 "zcopy": true, 00:16:04.147 "get_zone_info": false, 00:16:04.147 "zone_management": false, 00:16:04.147 "zone_append": false, 00:16:04.147 "compare": false, 00:16:04.147 "compare_and_write": false, 00:16:04.147 "abort": true, 00:16:04.147 "seek_hole": false, 00:16:04.147 "seek_data": false, 00:16:04.147 "copy": true, 00:16:04.147 "nvme_iov_md": false 00:16:04.147 }, 00:16:04.147 "memory_domains": [ 00:16:04.147 { 00:16:04.147 "dma_device_id": "system", 00:16:04.147 "dma_device_type": 1 00:16:04.147 }, 00:16:04.147 { 00:16:04.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.147 "dma_device_type": 2 00:16:04.147 } 00:16:04.147 ], 00:16:04.147 "driver_specific": {} 00:16:04.147 } 00:16:04.147 ] 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.147 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.147 "name": "Existed_Raid", 00:16:04.147 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.147 "strip_size_kb": 64, 00:16:04.147 "state": "configuring", 00:16:04.147 "raid_level": "raid5f", 00:16:04.147 "superblock": false, 00:16:04.147 "num_base_bdevs": 4, 00:16:04.147 "num_base_bdevs_discovered": 3, 00:16:04.147 "num_base_bdevs_operational": 4, 00:16:04.147 "base_bdevs_list": [ 00:16:04.147 { 00:16:04.147 "name": "BaseBdev1", 00:16:04.147 "uuid": "7762a92a-3b10-4a1e-8004-5333cf1f0a72", 00:16:04.147 "is_configured": true, 00:16:04.147 "data_offset": 0, 00:16:04.147 "data_size": 65536 00:16:04.147 }, 00:16:04.148 { 00:16:04.148 "name": "BaseBdev2", 00:16:04.148 "uuid": "e574c13f-e5b3-4414-84b0-ade2d449a090", 00:16:04.148 "is_configured": true, 00:16:04.148 "data_offset": 0, 00:16:04.148 "data_size": 65536 00:16:04.148 }, 00:16:04.148 { 
00:16:04.148 "name": "BaseBdev3", 00:16:04.148 "uuid": "ebfb0fa8-f197-49dc-b61e-f0654d516b7f", 00:16:04.148 "is_configured": true, 00:16:04.148 "data_offset": 0, 00:16:04.148 "data_size": 65536 00:16:04.148 }, 00:16:04.148 { 00:16:04.148 "name": "BaseBdev4", 00:16:04.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.148 "is_configured": false, 00:16:04.148 "data_offset": 0, 00:16:04.148 "data_size": 0 00:16:04.148 } 00:16:04.148 ] 00:16:04.148 }' 00:16:04.148 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.148 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.737 05:54:11 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:04.737 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.737 05:54:11 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.737 [2024-12-12 05:54:12.034769] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:04.737 [2024-12-12 05:54:12.034833] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:04.737 [2024-12-12 05:54:12.034843] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:04.737 [2024-12-12 05:54:12.035090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:04.737 [2024-12-12 05:54:12.041753] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:04.737 [2024-12-12 05:54:12.041821] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:04.737 [2024-12-12 05:54:12.042123] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.737 BaseBdev4 00:16:04.737 05:54:12 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.737 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:04.737 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:04.737 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:04.737 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:04.737 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:04.737 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:04.737 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:04.737 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.737 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.737 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.737 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:04.737 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.737 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.737 [ 00:16:04.737 { 00:16:04.737 "name": "BaseBdev4", 00:16:04.737 "aliases": [ 00:16:04.737 "18c02485-c1d4-4885-a4cb-19af8ba7c9a0" 00:16:04.737 ], 00:16:04.737 "product_name": "Malloc disk", 00:16:04.737 "block_size": 512, 00:16:04.737 "num_blocks": 65536, 00:16:04.737 "uuid": "18c02485-c1d4-4885-a4cb-19af8ba7c9a0", 00:16:04.737 "assigned_rate_limits": { 00:16:04.737 "rw_ios_per_sec": 0, 00:16:04.737 
"rw_mbytes_per_sec": 0, 00:16:04.737 "r_mbytes_per_sec": 0, 00:16:04.737 "w_mbytes_per_sec": 0 00:16:04.737 }, 00:16:04.737 "claimed": true, 00:16:04.737 "claim_type": "exclusive_write", 00:16:04.737 "zoned": false, 00:16:04.737 "supported_io_types": { 00:16:04.737 "read": true, 00:16:04.737 "write": true, 00:16:04.737 "unmap": true, 00:16:04.737 "flush": true, 00:16:04.737 "reset": true, 00:16:04.737 "nvme_admin": false, 00:16:04.737 "nvme_io": false, 00:16:04.737 "nvme_io_md": false, 00:16:04.737 "write_zeroes": true, 00:16:04.737 "zcopy": true, 00:16:04.737 "get_zone_info": false, 00:16:04.737 "zone_management": false, 00:16:04.737 "zone_append": false, 00:16:04.737 "compare": false, 00:16:04.737 "compare_and_write": false, 00:16:04.737 "abort": true, 00:16:04.737 "seek_hole": false, 00:16:04.737 "seek_data": false, 00:16:04.737 "copy": true, 00:16:04.737 "nvme_iov_md": false 00:16:04.737 }, 00:16:04.737 "memory_domains": [ 00:16:04.737 { 00:16:04.737 "dma_device_id": "system", 00:16:04.737 "dma_device_type": 1 00:16:04.737 }, 00:16:04.737 { 00:16:04.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.738 "dma_device_type": 2 00:16:04.738 } 00:16:04.738 ], 00:16:04.738 "driver_specific": {} 00:16:04.738 } 00:16:04.738 ] 00:16:04.738 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.738 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:04.738 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:04.738 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:04.738 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:04.738 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:04.738 05:54:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.738 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.738 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.738 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:04.738 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.738 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.738 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.738 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.738 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.738 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.738 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.738 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.738 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.738 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.738 "name": "Existed_Raid", 00:16:04.738 "uuid": "fc0ca537-ae9d-4c2b-82f0-65ae1d46b343", 00:16:04.738 "strip_size_kb": 64, 00:16:04.738 "state": "online", 00:16:04.738 "raid_level": "raid5f", 00:16:04.738 "superblock": false, 00:16:04.738 "num_base_bdevs": 4, 00:16:04.738 "num_base_bdevs_discovered": 4, 00:16:04.738 "num_base_bdevs_operational": 4, 00:16:04.738 "base_bdevs_list": [ 00:16:04.738 { 00:16:04.738 "name": 
"BaseBdev1", 00:16:04.738 "uuid": "7762a92a-3b10-4a1e-8004-5333cf1f0a72", 00:16:04.738 "is_configured": true, 00:16:04.738 "data_offset": 0, 00:16:04.738 "data_size": 65536 00:16:04.738 }, 00:16:04.738 { 00:16:04.738 "name": "BaseBdev2", 00:16:04.738 "uuid": "e574c13f-e5b3-4414-84b0-ade2d449a090", 00:16:04.738 "is_configured": true, 00:16:04.738 "data_offset": 0, 00:16:04.738 "data_size": 65536 00:16:04.738 }, 00:16:04.738 { 00:16:04.738 "name": "BaseBdev3", 00:16:04.738 "uuid": "ebfb0fa8-f197-49dc-b61e-f0654d516b7f", 00:16:04.738 "is_configured": true, 00:16:04.738 "data_offset": 0, 00:16:04.738 "data_size": 65536 00:16:04.738 }, 00:16:04.738 { 00:16:04.738 "name": "BaseBdev4", 00:16:04.738 "uuid": "18c02485-c1d4-4885-a4cb-19af8ba7c9a0", 00:16:04.738 "is_configured": true, 00:16:04.738 "data_offset": 0, 00:16:04.738 "data_size": 65536 00:16:04.738 } 00:16:04.738 ] 00:16:04.738 }' 00:16:04.738 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.738 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.997 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:04.997 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:04.997 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:04.997 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:04.997 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:04.997 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:04.997 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:04.997 05:54:12 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.997 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.997 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:04.997 [2024-12-12 05:54:12.509405] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:05.256 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.256 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:05.256 "name": "Existed_Raid", 00:16:05.256 "aliases": [ 00:16:05.256 "fc0ca537-ae9d-4c2b-82f0-65ae1d46b343" 00:16:05.256 ], 00:16:05.256 "product_name": "Raid Volume", 00:16:05.256 "block_size": 512, 00:16:05.256 "num_blocks": 196608, 00:16:05.256 "uuid": "fc0ca537-ae9d-4c2b-82f0-65ae1d46b343", 00:16:05.256 "assigned_rate_limits": { 00:16:05.256 "rw_ios_per_sec": 0, 00:16:05.256 "rw_mbytes_per_sec": 0, 00:16:05.256 "r_mbytes_per_sec": 0, 00:16:05.256 "w_mbytes_per_sec": 0 00:16:05.256 }, 00:16:05.256 "claimed": false, 00:16:05.256 "zoned": false, 00:16:05.256 "supported_io_types": { 00:16:05.256 "read": true, 00:16:05.256 "write": true, 00:16:05.256 "unmap": false, 00:16:05.256 "flush": false, 00:16:05.256 "reset": true, 00:16:05.256 "nvme_admin": false, 00:16:05.256 "nvme_io": false, 00:16:05.256 "nvme_io_md": false, 00:16:05.256 "write_zeroes": true, 00:16:05.256 "zcopy": false, 00:16:05.256 "get_zone_info": false, 00:16:05.256 "zone_management": false, 00:16:05.256 "zone_append": false, 00:16:05.256 "compare": false, 00:16:05.256 "compare_and_write": false, 00:16:05.256 "abort": false, 00:16:05.256 "seek_hole": false, 00:16:05.256 "seek_data": false, 00:16:05.256 "copy": false, 00:16:05.256 "nvme_iov_md": false 00:16:05.257 }, 00:16:05.257 "driver_specific": { 00:16:05.257 "raid": { 00:16:05.257 "uuid": "fc0ca537-ae9d-4c2b-82f0-65ae1d46b343", 00:16:05.257 "strip_size_kb": 64, 
00:16:05.257 "state": "online", 00:16:05.257 "raid_level": "raid5f", 00:16:05.257 "superblock": false, 00:16:05.257 "num_base_bdevs": 4, 00:16:05.257 "num_base_bdevs_discovered": 4, 00:16:05.257 "num_base_bdevs_operational": 4, 00:16:05.257 "base_bdevs_list": [ 00:16:05.257 { 00:16:05.257 "name": "BaseBdev1", 00:16:05.257 "uuid": "7762a92a-3b10-4a1e-8004-5333cf1f0a72", 00:16:05.257 "is_configured": true, 00:16:05.257 "data_offset": 0, 00:16:05.257 "data_size": 65536 00:16:05.257 }, 00:16:05.257 { 00:16:05.257 "name": "BaseBdev2", 00:16:05.257 "uuid": "e574c13f-e5b3-4414-84b0-ade2d449a090", 00:16:05.257 "is_configured": true, 00:16:05.257 "data_offset": 0, 00:16:05.257 "data_size": 65536 00:16:05.257 }, 00:16:05.257 { 00:16:05.257 "name": "BaseBdev3", 00:16:05.257 "uuid": "ebfb0fa8-f197-49dc-b61e-f0654d516b7f", 00:16:05.257 "is_configured": true, 00:16:05.257 "data_offset": 0, 00:16:05.257 "data_size": 65536 00:16:05.257 }, 00:16:05.257 { 00:16:05.257 "name": "BaseBdev4", 00:16:05.257 "uuid": "18c02485-c1d4-4885-a4cb-19af8ba7c9a0", 00:16:05.257 "is_configured": true, 00:16:05.257 "data_offset": 0, 00:16:05.257 "data_size": 65536 00:16:05.257 } 00:16:05.257 ] 00:16:05.257 } 00:16:05.257 } 00:16:05.257 }' 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:05.257 BaseBdev2 00:16:05.257 BaseBdev3 00:16:05.257 BaseBdev4' 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:05.257 05:54:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.257 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:16:05.517 [2024-12-12 05:54:12.860654] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.517 05:54:12 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.517 05:54:12 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.517 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.517 "name": "Existed_Raid", 00:16:05.517 "uuid": "fc0ca537-ae9d-4c2b-82f0-65ae1d46b343", 00:16:05.517 "strip_size_kb": 64, 00:16:05.517 "state": "online", 00:16:05.517 "raid_level": "raid5f", 00:16:05.517 "superblock": false, 00:16:05.517 "num_base_bdevs": 4, 00:16:05.517 "num_base_bdevs_discovered": 3, 00:16:05.517 "num_base_bdevs_operational": 3, 00:16:05.517 "base_bdevs_list": [ 00:16:05.517 { 00:16:05.517 "name": null, 00:16:05.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.517 "is_configured": false, 00:16:05.517 "data_offset": 0, 00:16:05.517 "data_size": 65536 00:16:05.517 }, 00:16:05.517 { 00:16:05.517 "name": "BaseBdev2", 00:16:05.517 "uuid": "e574c13f-e5b3-4414-84b0-ade2d449a090", 00:16:05.517 "is_configured": true, 00:16:05.517 "data_offset": 0, 00:16:05.517 "data_size": 65536 00:16:05.517 }, 00:16:05.517 { 00:16:05.517 "name": "BaseBdev3", 00:16:05.517 "uuid": "ebfb0fa8-f197-49dc-b61e-f0654d516b7f", 00:16:05.518 "is_configured": true, 00:16:05.518 "data_offset": 0, 00:16:05.518 "data_size": 65536 00:16:05.518 }, 00:16:05.518 { 00:16:05.518 "name": "BaseBdev4", 00:16:05.518 "uuid": "18c02485-c1d4-4885-a4cb-19af8ba7c9a0", 00:16:05.518 "is_configured": true, 00:16:05.518 "data_offset": 0, 00:16:05.518 "data_size": 65536 00:16:05.518 } 00:16:05.518 ] 00:16:05.518 }' 00:16:05.518 
05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.518 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.087 [2024-12-12 05:54:13.408391] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:06.087 [2024-12-12 05:54:13.408558] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.087 [2024-12-12 05:54:13.497543] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.087 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.087 [2024-12-12 05:54:13.553452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:06.347 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.347 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:06.347 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:06.347 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:06.347 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:06.347 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.347 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.347 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.347 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:06.347 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:06.347 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:06.347 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.347 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.347 [2024-12-12 05:54:13.698918] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:06.347 [2024-12-12 05:54:13.699011] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:06.347 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.347 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:06.347 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:06.347 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:06.347 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.348 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.348 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.348 05:54:13 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.348 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:06.348 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:06.348 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:06.348 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:06.348 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:06.348 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:06.348 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.348 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.608 BaseBdev2 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.608 [ 00:16:06.608 { 00:16:06.608 "name": "BaseBdev2", 00:16:06.608 "aliases": [ 00:16:06.608 "0ea705eb-0fbc-4898-9803-8960b18b0ed1" 00:16:06.608 ], 00:16:06.608 "product_name": "Malloc disk", 00:16:06.608 "block_size": 512, 00:16:06.608 "num_blocks": 65536, 00:16:06.608 "uuid": "0ea705eb-0fbc-4898-9803-8960b18b0ed1", 00:16:06.608 "assigned_rate_limits": { 00:16:06.608 "rw_ios_per_sec": 0, 00:16:06.608 "rw_mbytes_per_sec": 0, 00:16:06.608 "r_mbytes_per_sec": 0, 00:16:06.608 "w_mbytes_per_sec": 0 00:16:06.608 }, 00:16:06.608 "claimed": false, 00:16:06.608 "zoned": false, 00:16:06.608 "supported_io_types": { 00:16:06.608 "read": true, 00:16:06.608 "write": true, 00:16:06.608 "unmap": true, 00:16:06.608 "flush": true, 00:16:06.608 "reset": true, 00:16:06.608 "nvme_admin": false, 00:16:06.608 "nvme_io": false, 00:16:06.608 "nvme_io_md": false, 00:16:06.608 "write_zeroes": true, 00:16:06.608 "zcopy": true, 00:16:06.608 "get_zone_info": false, 00:16:06.608 "zone_management": false, 00:16:06.608 "zone_append": false, 00:16:06.608 "compare": false, 00:16:06.608 "compare_and_write": false, 00:16:06.608 "abort": true, 00:16:06.608 "seek_hole": false, 00:16:06.608 "seek_data": false, 00:16:06.608 "copy": true, 00:16:06.608 "nvme_iov_md": false 00:16:06.608 }, 00:16:06.608 "memory_domains": [ 00:16:06.608 { 00:16:06.608 "dma_device_id": "system", 00:16:06.608 "dma_device_type": 1 00:16:06.608 }, 
00:16:06.608 { 00:16:06.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.608 "dma_device_type": 2 00:16:06.608 } 00:16:06.608 ], 00:16:06.608 "driver_specific": {} 00:16:06.608 } 00:16:06.608 ] 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.608 BaseBdev3 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.608 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.609 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.609 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:06.609 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.609 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.609 [ 00:16:06.609 { 00:16:06.609 "name": "BaseBdev3", 00:16:06.609 "aliases": [ 00:16:06.609 "ee00add2-f3b2-4651-9736-2adb2c622074" 00:16:06.609 ], 00:16:06.609 "product_name": "Malloc disk", 00:16:06.609 "block_size": 512, 00:16:06.609 "num_blocks": 65536, 00:16:06.609 "uuid": "ee00add2-f3b2-4651-9736-2adb2c622074", 00:16:06.609 "assigned_rate_limits": { 00:16:06.609 "rw_ios_per_sec": 0, 00:16:06.609 "rw_mbytes_per_sec": 0, 00:16:06.609 "r_mbytes_per_sec": 0, 00:16:06.609 "w_mbytes_per_sec": 0 00:16:06.609 }, 00:16:06.609 "claimed": false, 00:16:06.609 "zoned": false, 00:16:06.609 "supported_io_types": { 00:16:06.609 "read": true, 00:16:06.609 "write": true, 00:16:06.609 "unmap": true, 00:16:06.609 "flush": true, 00:16:06.609 "reset": true, 00:16:06.609 "nvme_admin": false, 00:16:06.609 "nvme_io": false, 00:16:06.609 "nvme_io_md": false, 00:16:06.609 "write_zeroes": true, 00:16:06.609 "zcopy": true, 00:16:06.609 "get_zone_info": false, 00:16:06.609 "zone_management": false, 00:16:06.609 "zone_append": false, 00:16:06.609 "compare": false, 00:16:06.609 "compare_and_write": false, 00:16:06.609 "abort": true, 00:16:06.609 "seek_hole": false, 00:16:06.609 "seek_data": false, 00:16:06.609 "copy": true, 00:16:06.609 "nvme_iov_md": false 00:16:06.609 }, 00:16:06.609 "memory_domains": [ 00:16:06.609 { 00:16:06.609 "dma_device_id": "system", 00:16:06.609 
"dma_device_type": 1 00:16:06.609 }, 00:16:06.609 { 00:16:06.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.609 "dma_device_type": 2 00:16:06.609 } 00:16:06.609 ], 00:16:06.609 "driver_specific": {} 00:16:06.609 } 00:16:06.609 ] 00:16:06.609 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.609 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:06.609 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:06.609 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:06.609 05:54:13 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:06.609 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.609 05:54:13 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.609 BaseBdev4 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:06.609 05:54:14 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.609 [ 00:16:06.609 { 00:16:06.609 "name": "BaseBdev4", 00:16:06.609 "aliases": [ 00:16:06.609 "dfdc4839-9fcd-4874-8222-9e1a7881d072" 00:16:06.609 ], 00:16:06.609 "product_name": "Malloc disk", 00:16:06.609 "block_size": 512, 00:16:06.609 "num_blocks": 65536, 00:16:06.609 "uuid": "dfdc4839-9fcd-4874-8222-9e1a7881d072", 00:16:06.609 "assigned_rate_limits": { 00:16:06.609 "rw_ios_per_sec": 0, 00:16:06.609 "rw_mbytes_per_sec": 0, 00:16:06.609 "r_mbytes_per_sec": 0, 00:16:06.609 "w_mbytes_per_sec": 0 00:16:06.609 }, 00:16:06.609 "claimed": false, 00:16:06.609 "zoned": false, 00:16:06.609 "supported_io_types": { 00:16:06.609 "read": true, 00:16:06.609 "write": true, 00:16:06.609 "unmap": true, 00:16:06.609 "flush": true, 00:16:06.609 "reset": true, 00:16:06.609 "nvme_admin": false, 00:16:06.609 "nvme_io": false, 00:16:06.609 "nvme_io_md": false, 00:16:06.609 "write_zeroes": true, 00:16:06.609 "zcopy": true, 00:16:06.609 "get_zone_info": false, 00:16:06.609 "zone_management": false, 00:16:06.609 "zone_append": false, 00:16:06.609 "compare": false, 00:16:06.609 "compare_and_write": false, 00:16:06.609 "abort": true, 00:16:06.609 "seek_hole": false, 00:16:06.609 "seek_data": false, 00:16:06.609 "copy": true, 00:16:06.609 "nvme_iov_md": false 00:16:06.609 }, 00:16:06.609 "memory_domains": [ 00:16:06.609 { 00:16:06.609 
"dma_device_id": "system", 00:16:06.609 "dma_device_type": 1 00:16:06.609 }, 00:16:06.609 { 00:16:06.609 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:06.609 "dma_device_type": 2 00:16:06.609 } 00:16:06.609 ], 00:16:06.609 "driver_specific": {} 00:16:06.609 } 00:16:06.609 ] 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.609 [2024-12-12 05:54:14.072161] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:06.609 [2024-12-12 05:54:14.072269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:06.609 [2024-12-12 05:54:14.072309] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:06.609 [2024-12-12 05:54:14.074026] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:06.609 [2024-12-12 05:54:14.074116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.609 "name": "Existed_Raid", 00:16:06.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.609 "strip_size_kb": 64, 00:16:06.609 "state": "configuring", 00:16:06.609 "raid_level": "raid5f", 00:16:06.609 "superblock": false, 00:16:06.609 
"num_base_bdevs": 4, 00:16:06.609 "num_base_bdevs_discovered": 3, 00:16:06.609 "num_base_bdevs_operational": 4, 00:16:06.609 "base_bdevs_list": [ 00:16:06.609 { 00:16:06.609 "name": "BaseBdev1", 00:16:06.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.609 "is_configured": false, 00:16:06.609 "data_offset": 0, 00:16:06.609 "data_size": 0 00:16:06.609 }, 00:16:06.609 { 00:16:06.609 "name": "BaseBdev2", 00:16:06.609 "uuid": "0ea705eb-0fbc-4898-9803-8960b18b0ed1", 00:16:06.609 "is_configured": true, 00:16:06.609 "data_offset": 0, 00:16:06.609 "data_size": 65536 00:16:06.609 }, 00:16:06.609 { 00:16:06.609 "name": "BaseBdev3", 00:16:06.609 "uuid": "ee00add2-f3b2-4651-9736-2adb2c622074", 00:16:06.609 "is_configured": true, 00:16:06.609 "data_offset": 0, 00:16:06.609 "data_size": 65536 00:16:06.609 }, 00:16:06.609 { 00:16:06.609 "name": "BaseBdev4", 00:16:06.609 "uuid": "dfdc4839-9fcd-4874-8222-9e1a7881d072", 00:16:06.609 "is_configured": true, 00:16:06.609 "data_offset": 0, 00:16:06.609 "data_size": 65536 00:16:06.609 } 00:16:06.609 ] 00:16:06.609 }' 00:16:06.609 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.610 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.179 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:07.179 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.179 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.179 [2024-12-12 05:54:14.499462] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:07.180 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.180 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 
00:16:07.180 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.180 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.180 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.180 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.180 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:07.180 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.180 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.180 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.180 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.180 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.180 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.180 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.180 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.180 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.180 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.180 "name": "Existed_Raid", 00:16:07.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.180 "strip_size_kb": 64, 00:16:07.180 "state": "configuring", 00:16:07.180 "raid_level": "raid5f", 00:16:07.180 "superblock": false, 00:16:07.180 "num_base_bdevs": 4, 
00:16:07.180 "num_base_bdevs_discovered": 2, 00:16:07.180 "num_base_bdevs_operational": 4, 00:16:07.180 "base_bdevs_list": [ 00:16:07.180 { 00:16:07.180 "name": "BaseBdev1", 00:16:07.180 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.180 "is_configured": false, 00:16:07.180 "data_offset": 0, 00:16:07.180 "data_size": 0 00:16:07.180 }, 00:16:07.180 { 00:16:07.180 "name": null, 00:16:07.180 "uuid": "0ea705eb-0fbc-4898-9803-8960b18b0ed1", 00:16:07.180 "is_configured": false, 00:16:07.180 "data_offset": 0, 00:16:07.180 "data_size": 65536 00:16:07.180 }, 00:16:07.180 { 00:16:07.180 "name": "BaseBdev3", 00:16:07.180 "uuid": "ee00add2-f3b2-4651-9736-2adb2c622074", 00:16:07.180 "is_configured": true, 00:16:07.180 "data_offset": 0, 00:16:07.180 "data_size": 65536 00:16:07.180 }, 00:16:07.180 { 00:16:07.180 "name": "BaseBdev4", 00:16:07.180 "uuid": "dfdc4839-9fcd-4874-8222-9e1a7881d072", 00:16:07.180 "is_configured": true, 00:16:07.180 "data_offset": 0, 00:16:07.180 "data_size": 65536 00:16:07.180 } 00:16:07.180 ] 00:16:07.180 }' 00:16:07.180 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.180 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.439 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.439 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.439 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:07.439 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.699 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.700 05:54:14 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:07.700 05:54:14 
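The `verify_raid_bdev_state` calls traced above repeat one pattern: fetch the raid bdev JSON with `rpc_cmd bdev_raid_get_bdevs all`, select the entry with `jq`, then compare each field against the expected value with the same `[[ x == y ]]`-style checks seen in the trace. A minimal self-contained sketch of that compare step (the hard-coded `actual_*` values are stand-ins for the jq output, not real RPC calls):

```shell
# Sketch of the field-compare pattern used by verify_raid_bdev_state.
# actual_* values are stand-ins for fields jq would extract from the RPC JSON.
expected_state="configuring"
expected_level="raid5f"
actual_state="configuring"
actual_level="raid5f"
for pair in "$actual_state:$expected_state" "$actual_level:$expected_level"; do
  got=${pair%%:*}
  want=${pair#*:}
  if [ "$got" = "$want" ]; then
    echo "ok: $got"
  else
    echo "mismatch: got $got, want $want" >&2
    exit 1
  fi
done
```

Each mismatch exits non-zero, which is what makes the surrounding autotest run fail fast when a state transition does not land as expected.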
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:07.700 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.700 05:54:14 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.700 [2024-12-12 05:54:15.029761] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:07.700 BaseBdev1 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.700 05:54:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.700 [ 00:16:07.700 { 00:16:07.700 "name": "BaseBdev1", 00:16:07.700 "aliases": [ 00:16:07.700 "f3f4d31a-74e4-499f-8d8d-f72e8ec178ce" 00:16:07.700 ], 00:16:07.700 "product_name": "Malloc disk", 00:16:07.700 "block_size": 512, 00:16:07.700 "num_blocks": 65536, 00:16:07.700 "uuid": "f3f4d31a-74e4-499f-8d8d-f72e8ec178ce", 00:16:07.700 "assigned_rate_limits": { 00:16:07.700 "rw_ios_per_sec": 0, 00:16:07.700 "rw_mbytes_per_sec": 0, 00:16:07.700 "r_mbytes_per_sec": 0, 00:16:07.700 "w_mbytes_per_sec": 0 00:16:07.700 }, 00:16:07.700 "claimed": true, 00:16:07.700 "claim_type": "exclusive_write", 00:16:07.700 "zoned": false, 00:16:07.700 "supported_io_types": { 00:16:07.700 "read": true, 00:16:07.700 "write": true, 00:16:07.700 "unmap": true, 00:16:07.700 "flush": true, 00:16:07.700 "reset": true, 00:16:07.700 "nvme_admin": false, 00:16:07.700 "nvme_io": false, 00:16:07.700 "nvme_io_md": false, 00:16:07.700 "write_zeroes": true, 00:16:07.700 "zcopy": true, 00:16:07.700 "get_zone_info": false, 00:16:07.700 "zone_management": false, 00:16:07.700 "zone_append": false, 00:16:07.700 "compare": false, 00:16:07.700 "compare_and_write": false, 00:16:07.700 "abort": true, 00:16:07.700 "seek_hole": false, 00:16:07.700 "seek_data": false, 00:16:07.700 "copy": true, 00:16:07.700 "nvme_iov_md": false 00:16:07.700 }, 00:16:07.700 "memory_domains": [ 00:16:07.700 { 00:16:07.700 "dma_device_id": "system", 00:16:07.700 "dma_device_type": 1 00:16:07.700 }, 00:16:07.700 { 00:16:07.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.700 "dma_device_type": 2 00:16:07.700 } 00:16:07.700 ], 00:16:07.700 "driver_specific": {} 00:16:07.700 } 00:16:07.700 ] 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:07.700 05:54:15 
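BaseBdev1 above is created with `bdev_malloc_create 32 512 -b BaseBdev1`, where 32 is the bdev size in MiB and 512 the block size in bytes. The `"num_blocks": 65536` reported in the subsequent `bdev_get_bdevs` dump follows directly from that arithmetic:

```shell
# bdev_malloc_create takes the size in MiB and the block size in bytes;
# the reported num_blocks is simply total bytes divided by block size.
size_mib=32
block_size=512
num_blocks=$(( size_mib * 1024 * 1024 / block_size ))
echo "$num_blocks"   # 65536
```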
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.700 "name": "Existed_Raid", 00:16:07.700 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.700 "strip_size_kb": 64, 00:16:07.700 "state": 
"configuring", 00:16:07.700 "raid_level": "raid5f", 00:16:07.700 "superblock": false, 00:16:07.700 "num_base_bdevs": 4, 00:16:07.700 "num_base_bdevs_discovered": 3, 00:16:07.700 "num_base_bdevs_operational": 4, 00:16:07.700 "base_bdevs_list": [ 00:16:07.700 { 00:16:07.700 "name": "BaseBdev1", 00:16:07.700 "uuid": "f3f4d31a-74e4-499f-8d8d-f72e8ec178ce", 00:16:07.700 "is_configured": true, 00:16:07.700 "data_offset": 0, 00:16:07.700 "data_size": 65536 00:16:07.700 }, 00:16:07.700 { 00:16:07.700 "name": null, 00:16:07.700 "uuid": "0ea705eb-0fbc-4898-9803-8960b18b0ed1", 00:16:07.700 "is_configured": false, 00:16:07.700 "data_offset": 0, 00:16:07.700 "data_size": 65536 00:16:07.700 }, 00:16:07.700 { 00:16:07.700 "name": "BaseBdev3", 00:16:07.700 "uuid": "ee00add2-f3b2-4651-9736-2adb2c622074", 00:16:07.700 "is_configured": true, 00:16:07.700 "data_offset": 0, 00:16:07.700 "data_size": 65536 00:16:07.700 }, 00:16:07.700 { 00:16:07.700 "name": "BaseBdev4", 00:16:07.700 "uuid": "dfdc4839-9fcd-4874-8222-9e1a7881d072", 00:16:07.700 "is_configured": true, 00:16:07.700 "data_offset": 0, 00:16:07.700 "data_size": 65536 00:16:07.700 } 00:16:07.700 ] 00:16:07.700 }' 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.700 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.269 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.269 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.269 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.269 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:08.269 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.269 05:54:15 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:08.270 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:08.270 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.270 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.270 [2024-12-12 05:54:15.576884] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:08.270 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.270 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:08.270 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.270 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.270 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.270 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.270 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.270 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.270 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.270 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.270 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.270 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.270 05:54:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.270 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.270 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.270 05:54:15 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.270 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.270 "name": "Existed_Raid", 00:16:08.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.270 "strip_size_kb": 64, 00:16:08.270 "state": "configuring", 00:16:08.270 "raid_level": "raid5f", 00:16:08.270 "superblock": false, 00:16:08.270 "num_base_bdevs": 4, 00:16:08.270 "num_base_bdevs_discovered": 2, 00:16:08.270 "num_base_bdevs_operational": 4, 00:16:08.270 "base_bdevs_list": [ 00:16:08.270 { 00:16:08.270 "name": "BaseBdev1", 00:16:08.270 "uuid": "f3f4d31a-74e4-499f-8d8d-f72e8ec178ce", 00:16:08.270 "is_configured": true, 00:16:08.270 "data_offset": 0, 00:16:08.270 "data_size": 65536 00:16:08.270 }, 00:16:08.270 { 00:16:08.270 "name": null, 00:16:08.270 "uuid": "0ea705eb-0fbc-4898-9803-8960b18b0ed1", 00:16:08.270 "is_configured": false, 00:16:08.270 "data_offset": 0, 00:16:08.270 "data_size": 65536 00:16:08.270 }, 00:16:08.270 { 00:16:08.270 "name": null, 00:16:08.270 "uuid": "ee00add2-f3b2-4651-9736-2adb2c622074", 00:16:08.270 "is_configured": false, 00:16:08.270 "data_offset": 0, 00:16:08.270 "data_size": 65536 00:16:08.270 }, 00:16:08.270 { 00:16:08.270 "name": "BaseBdev4", 00:16:08.270 "uuid": "dfdc4839-9fcd-4874-8222-9e1a7881d072", 00:16:08.270 "is_configured": true, 00:16:08.270 "data_offset": 0, 00:16:08.270 "data_size": 65536 00:16:08.270 } 00:16:08.270 ] 00:16:08.270 }' 00:16:08.270 05:54:15 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.270 05:54:15 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.530 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.530 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:08.530 05:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.530 05:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.530 05:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.530 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:08.530 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:08.530 05:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.530 05:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.790 [2024-12-12 05:54:16.056051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:08.790 05:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.790 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:08.790 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:08.790 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:08.790 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:08.790 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:08.790 
05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:08.790 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.790 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.790 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.790 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.790 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.790 05:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.790 05:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.790 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.790 05:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.790 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.790 "name": "Existed_Raid", 00:16:08.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.790 "strip_size_kb": 64, 00:16:08.790 "state": "configuring", 00:16:08.790 "raid_level": "raid5f", 00:16:08.790 "superblock": false, 00:16:08.790 "num_base_bdevs": 4, 00:16:08.790 "num_base_bdevs_discovered": 3, 00:16:08.790 "num_base_bdevs_operational": 4, 00:16:08.790 "base_bdevs_list": [ 00:16:08.790 { 00:16:08.790 "name": "BaseBdev1", 00:16:08.790 "uuid": "f3f4d31a-74e4-499f-8d8d-f72e8ec178ce", 00:16:08.790 "is_configured": true, 00:16:08.790 "data_offset": 0, 00:16:08.790 "data_size": 65536 00:16:08.790 }, 00:16:08.790 { 00:16:08.790 "name": null, 00:16:08.790 "uuid": "0ea705eb-0fbc-4898-9803-8960b18b0ed1", 00:16:08.790 "is_configured": 
false, 00:16:08.790 "data_offset": 0, 00:16:08.790 "data_size": 65536 00:16:08.790 }, 00:16:08.790 { 00:16:08.790 "name": "BaseBdev3", 00:16:08.790 "uuid": "ee00add2-f3b2-4651-9736-2adb2c622074", 00:16:08.790 "is_configured": true, 00:16:08.790 "data_offset": 0, 00:16:08.790 "data_size": 65536 00:16:08.790 }, 00:16:08.790 { 00:16:08.790 "name": "BaseBdev4", 00:16:08.790 "uuid": "dfdc4839-9fcd-4874-8222-9e1a7881d072", 00:16:08.790 "is_configured": true, 00:16:08.790 "data_offset": 0, 00:16:08.790 "data_size": 65536 00:16:08.790 } 00:16:08.790 ] 00:16:08.790 }' 00:16:08.790 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:08.790 05:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.050 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.050 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:09.050 05:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.050 05:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.050 05:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.310 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:09.310 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:09.310 05:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.310 05:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.310 [2024-12-12 05:54:16.579207] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:09.310 05:54:16 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.310 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:09.310 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.310 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.310 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.310 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.310 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.310 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.310 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.310 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.310 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.310 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.310 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.310 05:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.310 05:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.310 05:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.310 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.310 "name": "Existed_Raid", 00:16:09.311 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:09.311 "strip_size_kb": 64, 00:16:09.311 "state": "configuring", 00:16:09.311 "raid_level": "raid5f", 00:16:09.311 "superblock": false, 00:16:09.311 "num_base_bdevs": 4, 00:16:09.311 "num_base_bdevs_discovered": 2, 00:16:09.311 "num_base_bdevs_operational": 4, 00:16:09.311 "base_bdevs_list": [ 00:16:09.311 { 00:16:09.311 "name": null, 00:16:09.311 "uuid": "f3f4d31a-74e4-499f-8d8d-f72e8ec178ce", 00:16:09.311 "is_configured": false, 00:16:09.311 "data_offset": 0, 00:16:09.311 "data_size": 65536 00:16:09.311 }, 00:16:09.311 { 00:16:09.311 "name": null, 00:16:09.311 "uuid": "0ea705eb-0fbc-4898-9803-8960b18b0ed1", 00:16:09.311 "is_configured": false, 00:16:09.311 "data_offset": 0, 00:16:09.311 "data_size": 65536 00:16:09.311 }, 00:16:09.311 { 00:16:09.311 "name": "BaseBdev3", 00:16:09.311 "uuid": "ee00add2-f3b2-4651-9736-2adb2c622074", 00:16:09.311 "is_configured": true, 00:16:09.311 "data_offset": 0, 00:16:09.311 "data_size": 65536 00:16:09.311 }, 00:16:09.311 { 00:16:09.311 "name": "BaseBdev4", 00:16:09.311 "uuid": "dfdc4839-9fcd-4874-8222-9e1a7881d072", 00:16:09.311 "is_configured": true, 00:16:09.311 "data_offset": 0, 00:16:09.311 "data_size": 65536 00:16:09.311 } 00:16:09.311 ] 00:16:09.311 }' 00:16:09.311 05:54:16 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.311 05:54:16 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.880 [2024-12-12 05:54:17.150655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.880 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:09.880 "name": "Existed_Raid", 00:16:09.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:09.880 "strip_size_kb": 64, 00:16:09.880 "state": "configuring", 00:16:09.880 "raid_level": "raid5f", 00:16:09.880 "superblock": false, 00:16:09.880 "num_base_bdevs": 4, 00:16:09.880 "num_base_bdevs_discovered": 3, 00:16:09.880 "num_base_bdevs_operational": 4, 00:16:09.880 "base_bdevs_list": [ 00:16:09.880 { 00:16:09.880 "name": null, 00:16:09.880 "uuid": "f3f4d31a-74e4-499f-8d8d-f72e8ec178ce", 00:16:09.880 "is_configured": false, 00:16:09.880 "data_offset": 0, 00:16:09.880 "data_size": 65536 00:16:09.880 }, 00:16:09.880 { 00:16:09.880 "name": "BaseBdev2", 00:16:09.880 "uuid": "0ea705eb-0fbc-4898-9803-8960b18b0ed1", 00:16:09.880 "is_configured": true, 00:16:09.880 "data_offset": 0, 00:16:09.880 "data_size": 65536 00:16:09.880 }, 00:16:09.880 { 00:16:09.880 "name": "BaseBdev3", 00:16:09.880 "uuid": "ee00add2-f3b2-4651-9736-2adb2c622074", 00:16:09.880 "is_configured": true, 00:16:09.880 "data_offset": 0, 00:16:09.880 "data_size": 65536 00:16:09.880 }, 00:16:09.880 { 00:16:09.880 "name": "BaseBdev4", 00:16:09.881 "uuid": "dfdc4839-9fcd-4874-8222-9e1a7881d072", 00:16:09.881 "is_configured": true, 00:16:09.881 "data_offset": 0, 00:16:09.881 "data_size": 65536 00:16:09.881 } 00:16:09.881 ] 00:16:09.881 }' 00:16:09.881 05:54:17 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:09.881 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.140 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.140 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.140 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.140 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:10.140 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.140 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:10.140 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.140 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.140 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.140 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:10.400 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.400 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u f3f4d31a-74e4-499f-8d8d-f72e8ec178ce 00:16:10.400 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.400 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.400 [2024-12-12 05:54:17.737162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:10.400 [2024-12-12 
05:54:17.737209] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:10.400 [2024-12-12 05:54:17.737216] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:10.400 [2024-12-12 05:54:17.737453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:10.400 [2024-12-12 05:54:17.744260] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:10.400 [2024-12-12 05:54:17.744324] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:10.400 [2024-12-12 05:54:17.744616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.400 NewBaseBdev 00:16:10.400 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.400 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:10.400 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:10.400 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:10.400 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:10.400 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:10.400 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:10.400 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:10.400 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.400 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.400 05:54:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.400 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:10.400 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.400 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.400 [ 00:16:10.400 { 00:16:10.400 "name": "NewBaseBdev", 00:16:10.400 "aliases": [ 00:16:10.400 "f3f4d31a-74e4-499f-8d8d-f72e8ec178ce" 00:16:10.400 ], 00:16:10.400 "product_name": "Malloc disk", 00:16:10.400 "block_size": 512, 00:16:10.400 "num_blocks": 65536, 00:16:10.400 "uuid": "f3f4d31a-74e4-499f-8d8d-f72e8ec178ce", 00:16:10.400 "assigned_rate_limits": { 00:16:10.400 "rw_ios_per_sec": 0, 00:16:10.401 "rw_mbytes_per_sec": 0, 00:16:10.401 "r_mbytes_per_sec": 0, 00:16:10.401 "w_mbytes_per_sec": 0 00:16:10.401 }, 00:16:10.401 "claimed": true, 00:16:10.401 "claim_type": "exclusive_write", 00:16:10.401 "zoned": false, 00:16:10.401 "supported_io_types": { 00:16:10.401 "read": true, 00:16:10.401 "write": true, 00:16:10.401 "unmap": true, 00:16:10.401 "flush": true, 00:16:10.401 "reset": true, 00:16:10.401 "nvme_admin": false, 00:16:10.401 "nvme_io": false, 00:16:10.401 "nvme_io_md": false, 00:16:10.401 "write_zeroes": true, 00:16:10.401 "zcopy": true, 00:16:10.401 "get_zone_info": false, 00:16:10.401 "zone_management": false, 00:16:10.401 "zone_append": false, 00:16:10.401 "compare": false, 00:16:10.401 "compare_and_write": false, 00:16:10.401 "abort": true, 00:16:10.401 "seek_hole": false, 00:16:10.401 "seek_data": false, 00:16:10.401 "copy": true, 00:16:10.401 "nvme_iov_md": false 00:16:10.401 }, 00:16:10.401 "memory_domains": [ 00:16:10.401 { 00:16:10.401 "dma_device_id": "system", 00:16:10.401 "dma_device_type": 1 00:16:10.401 }, 00:16:10.401 { 00:16:10.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:10.401 "dma_device_type": 2 00:16:10.401 } 
00:16:10.401 ], 00:16:10.401 "driver_specific": {} 00:16:10.401 } 00:16:10.401 ] 00:16:10.401 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.401 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:10.401 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:10.401 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:10.401 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.401 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.401 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.401 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:10.401 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.401 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.401 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.401 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.401 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.401 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:10.401 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.401 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.401 05:54:17 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.401 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.401 "name": "Existed_Raid", 00:16:10.401 "uuid": "373ada5f-cef9-4d63-8ba4-4390ad264f16", 00:16:10.401 "strip_size_kb": 64, 00:16:10.401 "state": "online", 00:16:10.401 "raid_level": "raid5f", 00:16:10.401 "superblock": false, 00:16:10.401 "num_base_bdevs": 4, 00:16:10.401 "num_base_bdevs_discovered": 4, 00:16:10.401 "num_base_bdevs_operational": 4, 00:16:10.401 "base_bdevs_list": [ 00:16:10.401 { 00:16:10.401 "name": "NewBaseBdev", 00:16:10.401 "uuid": "f3f4d31a-74e4-499f-8d8d-f72e8ec178ce", 00:16:10.401 "is_configured": true, 00:16:10.401 "data_offset": 0, 00:16:10.401 "data_size": 65536 00:16:10.401 }, 00:16:10.401 { 00:16:10.401 "name": "BaseBdev2", 00:16:10.401 "uuid": "0ea705eb-0fbc-4898-9803-8960b18b0ed1", 00:16:10.401 "is_configured": true, 00:16:10.401 "data_offset": 0, 00:16:10.401 "data_size": 65536 00:16:10.401 }, 00:16:10.401 { 00:16:10.401 "name": "BaseBdev3", 00:16:10.401 "uuid": "ee00add2-f3b2-4651-9736-2adb2c622074", 00:16:10.401 "is_configured": true, 00:16:10.401 "data_offset": 0, 00:16:10.401 "data_size": 65536 00:16:10.401 }, 00:16:10.401 { 00:16:10.401 "name": "BaseBdev4", 00:16:10.401 "uuid": "dfdc4839-9fcd-4874-8222-9e1a7881d072", 00:16:10.401 "is_configured": true, 00:16:10.401 "data_offset": 0, 00:16:10.401 "data_size": 65536 00:16:10.401 } 00:16:10.401 ] 00:16:10.401 }' 00:16:10.401 05:54:17 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.401 05:54:17 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.661 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:10.661 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:10.661 05:54:18 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:10.661 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:10.661 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:10.661 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:10.661 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:10.921 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:10.921 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.921 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.921 [2024-12-12 05:54:18.188378] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:10.921 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.921 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:10.921 "name": "Existed_Raid", 00:16:10.921 "aliases": [ 00:16:10.921 "373ada5f-cef9-4d63-8ba4-4390ad264f16" 00:16:10.921 ], 00:16:10.921 "product_name": "Raid Volume", 00:16:10.921 "block_size": 512, 00:16:10.921 "num_blocks": 196608, 00:16:10.921 "uuid": "373ada5f-cef9-4d63-8ba4-4390ad264f16", 00:16:10.921 "assigned_rate_limits": { 00:16:10.921 "rw_ios_per_sec": 0, 00:16:10.921 "rw_mbytes_per_sec": 0, 00:16:10.921 "r_mbytes_per_sec": 0, 00:16:10.921 "w_mbytes_per_sec": 0 00:16:10.921 }, 00:16:10.921 "claimed": false, 00:16:10.921 "zoned": false, 00:16:10.921 "supported_io_types": { 00:16:10.921 "read": true, 00:16:10.921 "write": true, 00:16:10.921 "unmap": false, 00:16:10.921 "flush": false, 00:16:10.921 "reset": true, 00:16:10.921 "nvme_admin": false, 00:16:10.921 "nvme_io": false, 00:16:10.921 "nvme_io_md": 
false, 00:16:10.921 "write_zeroes": true, 00:16:10.921 "zcopy": false, 00:16:10.921 "get_zone_info": false, 00:16:10.921 "zone_management": false, 00:16:10.921 "zone_append": false, 00:16:10.921 "compare": false, 00:16:10.921 "compare_and_write": false, 00:16:10.921 "abort": false, 00:16:10.921 "seek_hole": false, 00:16:10.921 "seek_data": false, 00:16:10.921 "copy": false, 00:16:10.921 "nvme_iov_md": false 00:16:10.921 }, 00:16:10.921 "driver_specific": { 00:16:10.921 "raid": { 00:16:10.921 "uuid": "373ada5f-cef9-4d63-8ba4-4390ad264f16", 00:16:10.921 "strip_size_kb": 64, 00:16:10.921 "state": "online", 00:16:10.921 "raid_level": "raid5f", 00:16:10.921 "superblock": false, 00:16:10.921 "num_base_bdevs": 4, 00:16:10.921 "num_base_bdevs_discovered": 4, 00:16:10.921 "num_base_bdevs_operational": 4, 00:16:10.921 "base_bdevs_list": [ 00:16:10.921 { 00:16:10.921 "name": "NewBaseBdev", 00:16:10.921 "uuid": "f3f4d31a-74e4-499f-8d8d-f72e8ec178ce", 00:16:10.921 "is_configured": true, 00:16:10.921 "data_offset": 0, 00:16:10.921 "data_size": 65536 00:16:10.921 }, 00:16:10.921 { 00:16:10.921 "name": "BaseBdev2", 00:16:10.921 "uuid": "0ea705eb-0fbc-4898-9803-8960b18b0ed1", 00:16:10.921 "is_configured": true, 00:16:10.921 "data_offset": 0, 00:16:10.921 "data_size": 65536 00:16:10.921 }, 00:16:10.921 { 00:16:10.921 "name": "BaseBdev3", 00:16:10.921 "uuid": "ee00add2-f3b2-4651-9736-2adb2c622074", 00:16:10.921 "is_configured": true, 00:16:10.921 "data_offset": 0, 00:16:10.921 "data_size": 65536 00:16:10.921 }, 00:16:10.921 { 00:16:10.921 "name": "BaseBdev4", 00:16:10.921 "uuid": "dfdc4839-9fcd-4874-8222-9e1a7881d072", 00:16:10.922 "is_configured": true, 00:16:10.922 "data_offset": 0, 00:16:10.922 "data_size": 65536 00:16:10.922 } 00:16:10.922 ] 00:16:10.922 } 00:16:10.922 } 00:16:10.922 }' 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:10.922 05:54:18 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:10.922 BaseBdev2 00:16:10.922 BaseBdev3 00:16:10.922 BaseBdev4' 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.922 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.182 [2024-12-12 05:54:18.515610] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:11.182 [2024-12-12 05:54:18.515677] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:11.182 [2024-12-12 05:54:18.515745] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:11.182 [2024-12-12 05:54:18.516045] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:11.182 [2024-12-12 05:54:18.516057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 82759 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 82759 ']' 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 82759 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:11.182 05:54:18 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82759 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:11.182 killing process with pid 82759 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82759' 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 82759 00:16:11.182 [2024-12-12 05:54:18.553075] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:11.182 05:54:18 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 82759 00:16:11.442 [2024-12-12 05:54:18.928672] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:12.824 05:54:19 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:12.824 00:16:12.824 real 0m11.328s 00:16:12.824 user 0m18.062s 00:16:12.824 sys 0m2.018s 00:16:12.824 ************************************ 00:16:12.824 END TEST raid5f_state_function_test 00:16:12.824 ************************************ 00:16:12.824 05:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.824 05:54:19 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.824 05:54:20 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:12.824 05:54:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:12.824 05:54:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:12.824 05:54:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:12.824 ************************************ 00:16:12.824 START TEST 
raid5f_state_function_test_sb 00:16:12.824 ************************************ 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:12.824 
05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:12.824 Process raid pid: 83359 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83359 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83359' 00:16:12.824 05:54:20 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83359 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83359 ']' 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:12.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:12.824 05:54:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.824 [2024-12-12 05:54:20.151580] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:16:12.825 [2024-12-12 05:54:20.151736] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:12.825 [2024-12-12 05:54:20.324391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:13.084 [2024-12-12 05:54:20.432761] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.344 [2024-12-12 05:54:20.636203] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.344 [2024-12-12 05:54:20.636246] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:13.604 05:54:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:13.604 05:54:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:13.604 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:13.604 05:54:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.604 05:54:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.604 [2024-12-12 05:54:20.977361] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:13.604 [2024-12-12 05:54:20.977409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:13.604 [2024-12-12 05:54:20.977420] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:13.604 [2024-12-12 05:54:20.977430] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:13.604 [2024-12-12 05:54:20.977436] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:16:13.604 [2024-12-12 05:54:20.977445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:13.604 [2024-12-12 05:54:20.977451] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:13.604 [2024-12-12 05:54:20.977459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:13.604 05:54:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.604 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:13.604 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:13.604 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:13.604 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.604 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.604 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:13.604 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.604 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.604 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.604 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.604 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.604 05:54:20 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:16:13.604 05:54:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.604 05:54:20 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.604 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.604 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.604 "name": "Existed_Raid", 00:16:13.604 "uuid": "61b0fd4e-d0ce-4392-a9f7-cb80b25aa08d", 00:16:13.604 "strip_size_kb": 64, 00:16:13.604 "state": "configuring", 00:16:13.604 "raid_level": "raid5f", 00:16:13.604 "superblock": true, 00:16:13.604 "num_base_bdevs": 4, 00:16:13.604 "num_base_bdevs_discovered": 0, 00:16:13.604 "num_base_bdevs_operational": 4, 00:16:13.604 "base_bdevs_list": [ 00:16:13.604 { 00:16:13.604 "name": "BaseBdev1", 00:16:13.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.604 "is_configured": false, 00:16:13.604 "data_offset": 0, 00:16:13.604 "data_size": 0 00:16:13.604 }, 00:16:13.604 { 00:16:13.604 "name": "BaseBdev2", 00:16:13.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.604 "is_configured": false, 00:16:13.604 "data_offset": 0, 00:16:13.604 "data_size": 0 00:16:13.604 }, 00:16:13.604 { 00:16:13.604 "name": "BaseBdev3", 00:16:13.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.604 "is_configured": false, 00:16:13.604 "data_offset": 0, 00:16:13.604 "data_size": 0 00:16:13.604 }, 00:16:13.604 { 00:16:13.604 "name": "BaseBdev4", 00:16:13.604 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.604 "is_configured": false, 00:16:13.604 "data_offset": 0, 00:16:13.604 "data_size": 0 00:16:13.604 } 00:16:13.604 ] 00:16:13.604 }' 00:16:13.604 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.604 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:14.173 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:14.173 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.173 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.174 [2024-12-12 05:54:21.400558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:14.174 [2024-12-12 05:54:21.400596] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.174 [2024-12-12 05:54:21.412555] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:14.174 [2024-12-12 05:54:21.412591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:14.174 [2024-12-12 05:54:21.412599] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:14.174 [2024-12-12 05:54:21.412608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:14.174 [2024-12-12 05:54:21.412614] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:14.174 [2024-12-12 05:54:21.412622] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:14.174 [2024-12-12 05:54:21.412628] 
bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:14.174 [2024-12-12 05:54:21.412636] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.174 [2024-12-12 05:54:21.458161] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.174 BaseBdev1 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.174 [ 00:16:14.174 { 00:16:14.174 "name": "BaseBdev1", 00:16:14.174 "aliases": [ 00:16:14.174 "5868dce6-5427-498a-8b6c-9fdf4033b5f4" 00:16:14.174 ], 00:16:14.174 "product_name": "Malloc disk", 00:16:14.174 "block_size": 512, 00:16:14.174 "num_blocks": 65536, 00:16:14.174 "uuid": "5868dce6-5427-498a-8b6c-9fdf4033b5f4", 00:16:14.174 "assigned_rate_limits": { 00:16:14.174 "rw_ios_per_sec": 0, 00:16:14.174 "rw_mbytes_per_sec": 0, 00:16:14.174 "r_mbytes_per_sec": 0, 00:16:14.174 "w_mbytes_per_sec": 0 00:16:14.174 }, 00:16:14.174 "claimed": true, 00:16:14.174 "claim_type": "exclusive_write", 00:16:14.174 "zoned": false, 00:16:14.174 "supported_io_types": { 00:16:14.174 "read": true, 00:16:14.174 "write": true, 00:16:14.174 "unmap": true, 00:16:14.174 "flush": true, 00:16:14.174 "reset": true, 00:16:14.174 "nvme_admin": false, 00:16:14.174 "nvme_io": false, 00:16:14.174 "nvme_io_md": false, 00:16:14.174 "write_zeroes": true, 00:16:14.174 "zcopy": true, 00:16:14.174 "get_zone_info": false, 00:16:14.174 "zone_management": false, 00:16:14.174 "zone_append": false, 00:16:14.174 "compare": false, 00:16:14.174 "compare_and_write": false, 00:16:14.174 "abort": true, 00:16:14.174 "seek_hole": false, 00:16:14.174 "seek_data": false, 00:16:14.174 "copy": true, 00:16:14.174 "nvme_iov_md": false 00:16:14.174 }, 00:16:14.174 "memory_domains": [ 00:16:14.174 { 00:16:14.174 "dma_device_id": "system", 00:16:14.174 "dma_device_type": 1 00:16:14.174 }, 00:16:14.174 { 00:16:14.174 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:16:14.174 "dma_device_type": 2 00:16:14.174 } 00:16:14.174 ], 00:16:14.174 "driver_specific": {} 00:16:14.174 } 00:16:14.174 ] 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.174 05:54:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.174 "name": "Existed_Raid", 00:16:14.174 "uuid": "5b67bda2-41cd-41c1-b8fd-cce307e6e8e6", 00:16:14.174 "strip_size_kb": 64, 00:16:14.174 "state": "configuring", 00:16:14.174 "raid_level": "raid5f", 00:16:14.174 "superblock": true, 00:16:14.174 "num_base_bdevs": 4, 00:16:14.174 "num_base_bdevs_discovered": 1, 00:16:14.174 "num_base_bdevs_operational": 4, 00:16:14.174 "base_bdevs_list": [ 00:16:14.174 { 00:16:14.174 "name": "BaseBdev1", 00:16:14.174 "uuid": "5868dce6-5427-498a-8b6c-9fdf4033b5f4", 00:16:14.174 "is_configured": true, 00:16:14.174 "data_offset": 2048, 00:16:14.174 "data_size": 63488 00:16:14.174 }, 00:16:14.174 { 00:16:14.174 "name": "BaseBdev2", 00:16:14.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.174 "is_configured": false, 00:16:14.174 "data_offset": 0, 00:16:14.174 "data_size": 0 00:16:14.174 }, 00:16:14.174 { 00:16:14.174 "name": "BaseBdev3", 00:16:14.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.174 "is_configured": false, 00:16:14.174 "data_offset": 0, 00:16:14.174 "data_size": 0 00:16:14.174 }, 00:16:14.174 { 00:16:14.174 "name": "BaseBdev4", 00:16:14.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.174 "is_configured": false, 00:16:14.174 "data_offset": 0, 00:16:14.174 "data_size": 0 00:16:14.174 } 00:16:14.174 ] 00:16:14.174 }' 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.174 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.434 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:14.434 05:54:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.434 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.434 [2024-12-12 05:54:21.929381] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:14.434 [2024-12-12 05:54:21.929442] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:14.434 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.434 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:14.434 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.434 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.434 [2024-12-12 05:54:21.937434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:14.434 [2024-12-12 05:54:21.939158] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:14.434 [2024-12-12 05:54:21.939197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:14.434 [2024-12-12 05:54:21.939207] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:14.434 [2024-12-12 05:54:21.939216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:14.434 [2024-12-12 05:54:21.939223] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:14.434 [2024-12-12 05:54:21.939230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:14.434 05:54:21 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.434 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:14.434 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:14.434 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:14.434 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.434 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.434 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.434 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.434 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.435 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.435 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.435 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.435 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.435 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.435 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.435 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.435 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.695 05:54:21 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.695 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.695 "name": "Existed_Raid", 00:16:14.695 "uuid": "e714c7e0-bb86-40f9-a3b4-bd04712c113f", 00:16:14.695 "strip_size_kb": 64, 00:16:14.695 "state": "configuring", 00:16:14.695 "raid_level": "raid5f", 00:16:14.695 "superblock": true, 00:16:14.695 "num_base_bdevs": 4, 00:16:14.695 "num_base_bdevs_discovered": 1, 00:16:14.695 "num_base_bdevs_operational": 4, 00:16:14.695 "base_bdevs_list": [ 00:16:14.695 { 00:16:14.695 "name": "BaseBdev1", 00:16:14.695 "uuid": "5868dce6-5427-498a-8b6c-9fdf4033b5f4", 00:16:14.695 "is_configured": true, 00:16:14.695 "data_offset": 2048, 00:16:14.695 "data_size": 63488 00:16:14.695 }, 00:16:14.695 { 00:16:14.695 "name": "BaseBdev2", 00:16:14.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.695 "is_configured": false, 00:16:14.695 "data_offset": 0, 00:16:14.695 "data_size": 0 00:16:14.695 }, 00:16:14.695 { 00:16:14.695 "name": "BaseBdev3", 00:16:14.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.695 "is_configured": false, 00:16:14.695 "data_offset": 0, 00:16:14.695 "data_size": 0 00:16:14.695 }, 00:16:14.695 { 00:16:14.695 "name": "BaseBdev4", 00:16:14.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.695 "is_configured": false, 00:16:14.695 "data_offset": 0, 00:16:14.695 "data_size": 0 00:16:14.695 } 00:16:14.695 ] 00:16:14.695 }' 00:16:14.695 05:54:21 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.695 05:54:21 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.954 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:14.954 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:14.954 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.954 [2024-12-12 05:54:22.389740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:14.954 BaseBdev2 00:16:14.954 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.954 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:14.954 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:14.954 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:14.954 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:14.954 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:14.954 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:14.954 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:14.954 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.954 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.954 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.954 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:14.954 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.954 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.954 [ 00:16:14.954 { 00:16:14.954 "name": "BaseBdev2", 00:16:14.954 "aliases": [ 00:16:14.954 
"a1cda3d3-3b77-4df5-b4dd-05396f9e5de0" 00:16:14.954 ], 00:16:14.954 "product_name": "Malloc disk", 00:16:14.954 "block_size": 512, 00:16:14.954 "num_blocks": 65536, 00:16:14.954 "uuid": "a1cda3d3-3b77-4df5-b4dd-05396f9e5de0", 00:16:14.954 "assigned_rate_limits": { 00:16:14.954 "rw_ios_per_sec": 0, 00:16:14.954 "rw_mbytes_per_sec": 0, 00:16:14.954 "r_mbytes_per_sec": 0, 00:16:14.954 "w_mbytes_per_sec": 0 00:16:14.954 }, 00:16:14.954 "claimed": true, 00:16:14.954 "claim_type": "exclusive_write", 00:16:14.954 "zoned": false, 00:16:14.954 "supported_io_types": { 00:16:14.954 "read": true, 00:16:14.954 "write": true, 00:16:14.954 "unmap": true, 00:16:14.954 "flush": true, 00:16:14.954 "reset": true, 00:16:14.954 "nvme_admin": false, 00:16:14.954 "nvme_io": false, 00:16:14.954 "nvme_io_md": false, 00:16:14.954 "write_zeroes": true, 00:16:14.954 "zcopy": true, 00:16:14.954 "get_zone_info": false, 00:16:14.954 "zone_management": false, 00:16:14.954 "zone_append": false, 00:16:14.954 "compare": false, 00:16:14.954 "compare_and_write": false, 00:16:14.954 "abort": true, 00:16:14.955 "seek_hole": false, 00:16:14.955 "seek_data": false, 00:16:14.955 "copy": true, 00:16:14.955 "nvme_iov_md": false 00:16:14.955 }, 00:16:14.955 "memory_domains": [ 00:16:14.955 { 00:16:14.955 "dma_device_id": "system", 00:16:14.955 "dma_device_type": 1 00:16:14.955 }, 00:16:14.955 { 00:16:14.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.955 "dma_device_type": 2 00:16:14.955 } 00:16:14.955 ], 00:16:14.955 "driver_specific": {} 00:16:14.955 } 00:16:14.955 ] 00:16:14.955 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.955 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:14.955 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:14.955 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:16:14.955 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:14.955 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:14.955 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:14.955 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:14.955 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:14.955 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:14.955 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.955 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.955 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.955 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.955 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.955 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.955 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.955 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.955 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.213 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.213 "name": "Existed_Raid", 00:16:15.213 "uuid": 
"e714c7e0-bb86-40f9-a3b4-bd04712c113f", 00:16:15.213 "strip_size_kb": 64, 00:16:15.213 "state": "configuring", 00:16:15.213 "raid_level": "raid5f", 00:16:15.213 "superblock": true, 00:16:15.213 "num_base_bdevs": 4, 00:16:15.213 "num_base_bdevs_discovered": 2, 00:16:15.213 "num_base_bdevs_operational": 4, 00:16:15.213 "base_bdevs_list": [ 00:16:15.213 { 00:16:15.213 "name": "BaseBdev1", 00:16:15.213 "uuid": "5868dce6-5427-498a-8b6c-9fdf4033b5f4", 00:16:15.213 "is_configured": true, 00:16:15.214 "data_offset": 2048, 00:16:15.214 "data_size": 63488 00:16:15.214 }, 00:16:15.214 { 00:16:15.214 "name": "BaseBdev2", 00:16:15.214 "uuid": "a1cda3d3-3b77-4df5-b4dd-05396f9e5de0", 00:16:15.214 "is_configured": true, 00:16:15.214 "data_offset": 2048, 00:16:15.214 "data_size": 63488 00:16:15.214 }, 00:16:15.214 { 00:16:15.214 "name": "BaseBdev3", 00:16:15.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.214 "is_configured": false, 00:16:15.214 "data_offset": 0, 00:16:15.214 "data_size": 0 00:16:15.214 }, 00:16:15.214 { 00:16:15.214 "name": "BaseBdev4", 00:16:15.214 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.214 "is_configured": false, 00:16:15.214 "data_offset": 0, 00:16:15.214 "data_size": 0 00:16:15.214 } 00:16:15.214 ] 00:16:15.214 }' 00:16:15.214 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.214 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.473 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:15.473 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.473 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.473 [2024-12-12 05:54:22.910527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:15.473 BaseBdev3 
00:16:15.473 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.473 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:15.473 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:15.473 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:15.473 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:15.473 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:15.473 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:15.473 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:15.473 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.473 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.473 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.473 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:15.473 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.473 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.473 [ 00:16:15.473 { 00:16:15.473 "name": "BaseBdev3", 00:16:15.473 "aliases": [ 00:16:15.473 "a5255c5a-20d5-47f2-ae05-e24d3e58ddc5" 00:16:15.473 ], 00:16:15.473 "product_name": "Malloc disk", 00:16:15.473 "block_size": 512, 00:16:15.473 "num_blocks": 65536, 00:16:15.473 "uuid": "a5255c5a-20d5-47f2-ae05-e24d3e58ddc5", 00:16:15.473 
"assigned_rate_limits": { 00:16:15.473 "rw_ios_per_sec": 0, 00:16:15.473 "rw_mbytes_per_sec": 0, 00:16:15.473 "r_mbytes_per_sec": 0, 00:16:15.473 "w_mbytes_per_sec": 0 00:16:15.473 }, 00:16:15.473 "claimed": true, 00:16:15.473 "claim_type": "exclusive_write", 00:16:15.473 "zoned": false, 00:16:15.474 "supported_io_types": { 00:16:15.474 "read": true, 00:16:15.474 "write": true, 00:16:15.474 "unmap": true, 00:16:15.474 "flush": true, 00:16:15.474 "reset": true, 00:16:15.474 "nvme_admin": false, 00:16:15.474 "nvme_io": false, 00:16:15.474 "nvme_io_md": false, 00:16:15.474 "write_zeroes": true, 00:16:15.474 "zcopy": true, 00:16:15.474 "get_zone_info": false, 00:16:15.474 "zone_management": false, 00:16:15.474 "zone_append": false, 00:16:15.474 "compare": false, 00:16:15.474 "compare_and_write": false, 00:16:15.474 "abort": true, 00:16:15.474 "seek_hole": false, 00:16:15.474 "seek_data": false, 00:16:15.474 "copy": true, 00:16:15.474 "nvme_iov_md": false 00:16:15.474 }, 00:16:15.474 "memory_domains": [ 00:16:15.474 { 00:16:15.474 "dma_device_id": "system", 00:16:15.474 "dma_device_type": 1 00:16:15.474 }, 00:16:15.474 { 00:16:15.474 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.474 "dma_device_type": 2 00:16:15.474 } 00:16:15.474 ], 00:16:15.474 "driver_specific": {} 00:16:15.474 } 00:16:15.474 ] 00:16:15.474 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.474 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:15.474 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:15.474 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:15.474 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:15.474 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:16:15.474 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:15.474 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.474 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.474 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.474 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.474 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.474 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.474 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.474 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.474 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.474 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.474 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.474 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.733 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.733 "name": "Existed_Raid", 00:16:15.733 "uuid": "e714c7e0-bb86-40f9-a3b4-bd04712c113f", 00:16:15.733 "strip_size_kb": 64, 00:16:15.733 "state": "configuring", 00:16:15.733 "raid_level": "raid5f", 00:16:15.733 "superblock": true, 00:16:15.733 "num_base_bdevs": 4, 00:16:15.733 "num_base_bdevs_discovered": 3, 
00:16:15.733 "num_base_bdevs_operational": 4, 00:16:15.733 "base_bdevs_list": [ 00:16:15.733 { 00:16:15.733 "name": "BaseBdev1", 00:16:15.733 "uuid": "5868dce6-5427-498a-8b6c-9fdf4033b5f4", 00:16:15.733 "is_configured": true, 00:16:15.733 "data_offset": 2048, 00:16:15.733 "data_size": 63488 00:16:15.733 }, 00:16:15.733 { 00:16:15.733 "name": "BaseBdev2", 00:16:15.733 "uuid": "a1cda3d3-3b77-4df5-b4dd-05396f9e5de0", 00:16:15.733 "is_configured": true, 00:16:15.733 "data_offset": 2048, 00:16:15.733 "data_size": 63488 00:16:15.733 }, 00:16:15.733 { 00:16:15.733 "name": "BaseBdev3", 00:16:15.733 "uuid": "a5255c5a-20d5-47f2-ae05-e24d3e58ddc5", 00:16:15.733 "is_configured": true, 00:16:15.733 "data_offset": 2048, 00:16:15.733 "data_size": 63488 00:16:15.733 }, 00:16:15.733 { 00:16:15.733 "name": "BaseBdev4", 00:16:15.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:15.733 "is_configured": false, 00:16:15.733 "data_offset": 0, 00:16:15.733 "data_size": 0 00:16:15.733 } 00:16:15.733 ] 00:16:15.733 }' 00:16:15.733 05:54:22 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.733 05:54:22 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.992 [2024-12-12 05:54:23.416268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:15.992 [2024-12-12 05:54:23.416586] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:15.992 [2024-12-12 05:54:23.416622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:15.992 [2024-12-12 
05:54:23.416911] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:15.992 BaseBdev4 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.992 [2024-12-12 05:54:23.424204] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:15.992 [2024-12-12 05:54:23.424233] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:15.992 [2024-12-12 05:54:23.424499] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:15.992 05:54:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:15.992 [ 00:16:15.992 { 00:16:15.992 "name": "BaseBdev4", 00:16:15.992 "aliases": [ 00:16:15.992 "749282c3-7f57-4013-bbff-d43d11b59ca2" 00:16:15.992 ], 00:16:15.992 "product_name": "Malloc disk", 00:16:15.992 "block_size": 512, 00:16:15.992 "num_blocks": 65536, 00:16:15.992 "uuid": "749282c3-7f57-4013-bbff-d43d11b59ca2", 00:16:15.992 "assigned_rate_limits": { 00:16:15.992 "rw_ios_per_sec": 0, 00:16:15.992 "rw_mbytes_per_sec": 0, 00:16:15.992 "r_mbytes_per_sec": 0, 00:16:15.992 "w_mbytes_per_sec": 0 00:16:15.992 }, 00:16:15.992 "claimed": true, 00:16:15.992 "claim_type": "exclusive_write", 00:16:15.992 "zoned": false, 00:16:15.992 "supported_io_types": { 00:16:15.992 "read": true, 00:16:15.992 "write": true, 00:16:15.992 "unmap": true, 00:16:15.992 "flush": true, 00:16:15.992 "reset": true, 00:16:15.992 "nvme_admin": false, 00:16:15.992 "nvme_io": false, 00:16:15.992 "nvme_io_md": false, 00:16:15.992 "write_zeroes": true, 00:16:15.992 "zcopy": true, 00:16:15.992 "get_zone_info": false, 00:16:15.992 "zone_management": false, 00:16:15.992 "zone_append": false, 00:16:15.992 "compare": false, 00:16:15.992 "compare_and_write": false, 00:16:15.992 "abort": true, 00:16:15.992 "seek_hole": false, 00:16:15.992 "seek_data": false, 00:16:15.992 "copy": true, 00:16:15.992 "nvme_iov_md": false 00:16:15.992 }, 00:16:15.992 "memory_domains": [ 00:16:15.992 { 00:16:15.992 "dma_device_id": "system", 00:16:15.992 "dma_device_type": 1 00:16:15.992 }, 00:16:15.992 { 00:16:15.992 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.992 "dma_device_type": 2 00:16:15.992 } 00:16:15.992 ], 00:16:15.992 "driver_specific": {} 00:16:15.992 } 00:16:15.992 ] 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.992 05:54:23 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:15.992 "name": "Existed_Raid", 00:16:15.992 "uuid": "e714c7e0-bb86-40f9-a3b4-bd04712c113f", 00:16:15.992 "strip_size_kb": 64, 00:16:15.992 "state": "online", 00:16:15.992 "raid_level": "raid5f", 00:16:15.992 "superblock": true, 00:16:15.992 "num_base_bdevs": 4, 00:16:15.992 "num_base_bdevs_discovered": 4, 00:16:15.992 "num_base_bdevs_operational": 4, 00:16:15.992 "base_bdevs_list": [ 00:16:15.992 { 00:16:15.992 "name": "BaseBdev1", 00:16:15.992 "uuid": "5868dce6-5427-498a-8b6c-9fdf4033b5f4", 00:16:15.992 "is_configured": true, 00:16:15.992 "data_offset": 2048, 00:16:15.992 "data_size": 63488 00:16:15.992 }, 00:16:15.992 { 00:16:15.992 "name": "BaseBdev2", 00:16:15.992 "uuid": "a1cda3d3-3b77-4df5-b4dd-05396f9e5de0", 00:16:15.992 "is_configured": true, 00:16:15.992 "data_offset": 2048, 00:16:15.992 "data_size": 63488 00:16:15.992 }, 00:16:15.992 { 00:16:15.992 "name": "BaseBdev3", 00:16:15.992 "uuid": "a5255c5a-20d5-47f2-ae05-e24d3e58ddc5", 00:16:15.992 "is_configured": true, 00:16:15.992 "data_offset": 2048, 00:16:15.992 "data_size": 63488 00:16:15.992 }, 00:16:15.992 { 00:16:15.992 "name": "BaseBdev4", 00:16:15.992 "uuid": "749282c3-7f57-4013-bbff-d43d11b59ca2", 00:16:15.992 "is_configured": true, 00:16:15.992 "data_offset": 2048, 00:16:15.992 "data_size": 63488 00:16:15.992 } 00:16:15.992 ] 00:16:15.992 }' 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:15.992 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.560 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:16.560 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:16:16.560 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:16.560 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:16.560 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:16.560 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:16.560 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:16.560 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:16.560 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.560 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.560 [2024-12-12 05:54:23.851868] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:16.560 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.560 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:16.560 "name": "Existed_Raid", 00:16:16.560 "aliases": [ 00:16:16.560 "e714c7e0-bb86-40f9-a3b4-bd04712c113f" 00:16:16.560 ], 00:16:16.560 "product_name": "Raid Volume", 00:16:16.560 "block_size": 512, 00:16:16.560 "num_blocks": 190464, 00:16:16.560 "uuid": "e714c7e0-bb86-40f9-a3b4-bd04712c113f", 00:16:16.560 "assigned_rate_limits": { 00:16:16.560 "rw_ios_per_sec": 0, 00:16:16.560 "rw_mbytes_per_sec": 0, 00:16:16.560 "r_mbytes_per_sec": 0, 00:16:16.560 "w_mbytes_per_sec": 0 00:16:16.560 }, 00:16:16.560 "claimed": false, 00:16:16.560 "zoned": false, 00:16:16.560 "supported_io_types": { 00:16:16.560 "read": true, 00:16:16.560 "write": true, 00:16:16.560 "unmap": false, 00:16:16.560 "flush": false, 
00:16:16.560 "reset": true, 00:16:16.560 "nvme_admin": false, 00:16:16.560 "nvme_io": false, 00:16:16.560 "nvme_io_md": false, 00:16:16.560 "write_zeroes": true, 00:16:16.560 "zcopy": false, 00:16:16.560 "get_zone_info": false, 00:16:16.560 "zone_management": false, 00:16:16.560 "zone_append": false, 00:16:16.560 "compare": false, 00:16:16.560 "compare_and_write": false, 00:16:16.560 "abort": false, 00:16:16.560 "seek_hole": false, 00:16:16.560 "seek_data": false, 00:16:16.560 "copy": false, 00:16:16.560 "nvme_iov_md": false 00:16:16.560 }, 00:16:16.560 "driver_specific": { 00:16:16.560 "raid": { 00:16:16.560 "uuid": "e714c7e0-bb86-40f9-a3b4-bd04712c113f", 00:16:16.560 "strip_size_kb": 64, 00:16:16.560 "state": "online", 00:16:16.560 "raid_level": "raid5f", 00:16:16.560 "superblock": true, 00:16:16.560 "num_base_bdevs": 4, 00:16:16.560 "num_base_bdevs_discovered": 4, 00:16:16.560 "num_base_bdevs_operational": 4, 00:16:16.560 "base_bdevs_list": [ 00:16:16.560 { 00:16:16.560 "name": "BaseBdev1", 00:16:16.560 "uuid": "5868dce6-5427-498a-8b6c-9fdf4033b5f4", 00:16:16.560 "is_configured": true, 00:16:16.560 "data_offset": 2048, 00:16:16.560 "data_size": 63488 00:16:16.560 }, 00:16:16.560 { 00:16:16.560 "name": "BaseBdev2", 00:16:16.560 "uuid": "a1cda3d3-3b77-4df5-b4dd-05396f9e5de0", 00:16:16.560 "is_configured": true, 00:16:16.560 "data_offset": 2048, 00:16:16.560 "data_size": 63488 00:16:16.560 }, 00:16:16.560 { 00:16:16.560 "name": "BaseBdev3", 00:16:16.560 "uuid": "a5255c5a-20d5-47f2-ae05-e24d3e58ddc5", 00:16:16.560 "is_configured": true, 00:16:16.560 "data_offset": 2048, 00:16:16.560 "data_size": 63488 00:16:16.560 }, 00:16:16.560 { 00:16:16.560 "name": "BaseBdev4", 00:16:16.560 "uuid": "749282c3-7f57-4013-bbff-d43d11b59ca2", 00:16:16.560 "is_configured": true, 00:16:16.560 "data_offset": 2048, 00:16:16.560 "data_size": 63488 00:16:16.560 } 00:16:16.560 ] 00:16:16.560 } 00:16:16.560 } 00:16:16.560 }' 00:16:16.560 05:54:23 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:16.560 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:16.560 BaseBdev2 00:16:16.560 BaseBdev3 00:16:16.560 BaseBdev4' 00:16:16.560 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.560 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:16.560 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.560 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.560 05:54:23 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:16.560 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.561 05:54:23 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:16.561 05:54:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:16.561 05:54:24 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.561 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.820 [2024-12-12 05:54:24.127271] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:16.820 "name": "Existed_Raid", 00:16:16.820 "uuid": "e714c7e0-bb86-40f9-a3b4-bd04712c113f", 00:16:16.820 "strip_size_kb": 64, 00:16:16.820 "state": "online", 00:16:16.820 "raid_level": "raid5f", 00:16:16.820 "superblock": true, 00:16:16.820 "num_base_bdevs": 4, 00:16:16.820 "num_base_bdevs_discovered": 3, 00:16:16.820 "num_base_bdevs_operational": 3, 00:16:16.820 "base_bdevs_list": [ 00:16:16.820 { 00:16:16.820 "name": 
null, 00:16:16.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:16.820 "is_configured": false, 00:16:16.820 "data_offset": 0, 00:16:16.820 "data_size": 63488 00:16:16.820 }, 00:16:16.820 { 00:16:16.820 "name": "BaseBdev2", 00:16:16.820 "uuid": "a1cda3d3-3b77-4df5-b4dd-05396f9e5de0", 00:16:16.820 "is_configured": true, 00:16:16.820 "data_offset": 2048, 00:16:16.820 "data_size": 63488 00:16:16.820 }, 00:16:16.820 { 00:16:16.820 "name": "BaseBdev3", 00:16:16.820 "uuid": "a5255c5a-20d5-47f2-ae05-e24d3e58ddc5", 00:16:16.820 "is_configured": true, 00:16:16.820 "data_offset": 2048, 00:16:16.820 "data_size": 63488 00:16:16.820 }, 00:16:16.820 { 00:16:16.820 "name": "BaseBdev4", 00:16:16.820 "uuid": "749282c3-7f57-4013-bbff-d43d11b59ca2", 00:16:16.820 "is_configured": true, 00:16:16.820 "data_offset": 2048, 00:16:16.820 "data_size": 63488 00:16:16.820 } 00:16:16.820 ] 00:16:16.820 }' 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:16.820 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.388 [2024-12-12 05:54:24.679445] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:17.388 [2024-12-12 05:54:24.679622] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:17.388 [2024-12-12 05:54:24.768235] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.388 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.388 [2024-12-12 05:54:24.824122] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:17.649 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.649 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:17.649 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:17.649 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.649 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.649 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.649 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:17.649 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.649 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:17.649 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:17.649 05:54:24 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:17.649 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.649 05:54:24 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.649 [2024-12-12 
05:54:24.974384] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:17.649 [2024-12-12 05:54:24.974434] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.649 05:54:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.649 BaseBdev2 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.649 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.910 [ 00:16:17.910 { 00:16:17.910 "name": "BaseBdev2", 00:16:17.910 "aliases": [ 00:16:17.910 "c4245544-0812-49f4-8559-a7c26744f828" 00:16:17.910 ], 00:16:17.910 "product_name": "Malloc disk", 00:16:17.910 "block_size": 512, 00:16:17.910 
"num_blocks": 65536, 00:16:17.910 "uuid": "c4245544-0812-49f4-8559-a7c26744f828", 00:16:17.910 "assigned_rate_limits": { 00:16:17.910 "rw_ios_per_sec": 0, 00:16:17.910 "rw_mbytes_per_sec": 0, 00:16:17.910 "r_mbytes_per_sec": 0, 00:16:17.910 "w_mbytes_per_sec": 0 00:16:17.910 }, 00:16:17.910 "claimed": false, 00:16:17.910 "zoned": false, 00:16:17.910 "supported_io_types": { 00:16:17.910 "read": true, 00:16:17.910 "write": true, 00:16:17.910 "unmap": true, 00:16:17.910 "flush": true, 00:16:17.910 "reset": true, 00:16:17.910 "nvme_admin": false, 00:16:17.910 "nvme_io": false, 00:16:17.910 "nvme_io_md": false, 00:16:17.910 "write_zeroes": true, 00:16:17.910 "zcopy": true, 00:16:17.910 "get_zone_info": false, 00:16:17.910 "zone_management": false, 00:16:17.910 "zone_append": false, 00:16:17.910 "compare": false, 00:16:17.910 "compare_and_write": false, 00:16:17.910 "abort": true, 00:16:17.910 "seek_hole": false, 00:16:17.910 "seek_data": false, 00:16:17.910 "copy": true, 00:16:17.910 "nvme_iov_md": false 00:16:17.910 }, 00:16:17.910 "memory_domains": [ 00:16:17.910 { 00:16:17.910 "dma_device_id": "system", 00:16:17.910 "dma_device_type": 1 00:16:17.910 }, 00:16:17.910 { 00:16:17.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.910 "dma_device_type": 2 00:16:17.910 } 00:16:17.910 ], 00:16:17.910 "driver_specific": {} 00:16:17.910 } 00:16:17.910 ] 00:16:17.910 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.910 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:17.910 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:17.910 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:17.910 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:17.910 05:54:25 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.910 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.910 BaseBdev3 00:16:17.910 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.910 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:17.910 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:17.910 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:17.910 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:17.910 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:17.910 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:17.910 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:17.910 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.910 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.910 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.910 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:17.910 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.910 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.910 [ 00:16:17.910 { 00:16:17.910 "name": "BaseBdev3", 00:16:17.910 "aliases": [ 00:16:17.910 
"17200211-34cc-4c41-b704-5636fa5b31a8" 00:16:17.910 ], 00:16:17.910 "product_name": "Malloc disk", 00:16:17.911 "block_size": 512, 00:16:17.911 "num_blocks": 65536, 00:16:17.911 "uuid": "17200211-34cc-4c41-b704-5636fa5b31a8", 00:16:17.911 "assigned_rate_limits": { 00:16:17.911 "rw_ios_per_sec": 0, 00:16:17.911 "rw_mbytes_per_sec": 0, 00:16:17.911 "r_mbytes_per_sec": 0, 00:16:17.911 "w_mbytes_per_sec": 0 00:16:17.911 }, 00:16:17.911 "claimed": false, 00:16:17.911 "zoned": false, 00:16:17.911 "supported_io_types": { 00:16:17.911 "read": true, 00:16:17.911 "write": true, 00:16:17.911 "unmap": true, 00:16:17.911 "flush": true, 00:16:17.911 "reset": true, 00:16:17.911 "nvme_admin": false, 00:16:17.911 "nvme_io": false, 00:16:17.911 "nvme_io_md": false, 00:16:17.911 "write_zeroes": true, 00:16:17.911 "zcopy": true, 00:16:17.911 "get_zone_info": false, 00:16:17.911 "zone_management": false, 00:16:17.911 "zone_append": false, 00:16:17.911 "compare": false, 00:16:17.911 "compare_and_write": false, 00:16:17.911 "abort": true, 00:16:17.911 "seek_hole": false, 00:16:17.911 "seek_data": false, 00:16:17.911 "copy": true, 00:16:17.911 "nvme_iov_md": false 00:16:17.911 }, 00:16:17.911 "memory_domains": [ 00:16:17.911 { 00:16:17.911 "dma_device_id": "system", 00:16:17.911 "dma_device_type": 1 00:16:17.911 }, 00:16:17.911 { 00:16:17.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.911 "dma_device_type": 2 00:16:17.911 } 00:16:17.911 ], 00:16:17.911 "driver_specific": {} 00:16:17.911 } 00:16:17.911 ] 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:17.911 05:54:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.911 BaseBdev4 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:16:17.911 [ 00:16:17.911 { 00:16:17.911 "name": "BaseBdev4", 00:16:17.911 "aliases": [ 00:16:17.911 "a701f8f4-4f29-4fad-856a-ade0820edb31" 00:16:17.911 ], 00:16:17.911 "product_name": "Malloc disk", 00:16:17.911 "block_size": 512, 00:16:17.911 "num_blocks": 65536, 00:16:17.911 "uuid": "a701f8f4-4f29-4fad-856a-ade0820edb31", 00:16:17.911 "assigned_rate_limits": { 00:16:17.911 "rw_ios_per_sec": 0, 00:16:17.911 "rw_mbytes_per_sec": 0, 00:16:17.911 "r_mbytes_per_sec": 0, 00:16:17.911 "w_mbytes_per_sec": 0 00:16:17.911 }, 00:16:17.911 "claimed": false, 00:16:17.911 "zoned": false, 00:16:17.911 "supported_io_types": { 00:16:17.911 "read": true, 00:16:17.911 "write": true, 00:16:17.911 "unmap": true, 00:16:17.911 "flush": true, 00:16:17.911 "reset": true, 00:16:17.911 "nvme_admin": false, 00:16:17.911 "nvme_io": false, 00:16:17.911 "nvme_io_md": false, 00:16:17.911 "write_zeroes": true, 00:16:17.911 "zcopy": true, 00:16:17.911 "get_zone_info": false, 00:16:17.911 "zone_management": false, 00:16:17.911 "zone_append": false, 00:16:17.911 "compare": false, 00:16:17.911 "compare_and_write": false, 00:16:17.911 "abort": true, 00:16:17.911 "seek_hole": false, 00:16:17.911 "seek_data": false, 00:16:17.911 "copy": true, 00:16:17.911 "nvme_iov_md": false 00:16:17.911 }, 00:16:17.911 "memory_domains": [ 00:16:17.911 { 00:16:17.911 "dma_device_id": "system", 00:16:17.911 "dma_device_type": 1 00:16:17.911 }, 00:16:17.911 { 00:16:17.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.911 "dma_device_type": 2 00:16:17.911 } 00:16:17.911 ], 00:16:17.911 "driver_specific": {} 00:16:17.911 } 00:16:17.911 ] 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:17.911 05:54:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.911 [2024-12-12 05:54:25.342536] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:17.911 [2024-12-12 05:54:25.342575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:17.911 [2024-12-12 05:54:25.342597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:17.911 [2024-12-12 05:54:25.344299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:17.911 [2024-12-12 05:54:25.344355] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.911 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.911 "name": "Existed_Raid", 00:16:17.911 "uuid": "49b1149f-f10e-476b-96aa-fefe8e40c6c9", 00:16:17.911 "strip_size_kb": 64, 00:16:17.911 "state": "configuring", 00:16:17.911 "raid_level": "raid5f", 00:16:17.911 "superblock": true, 00:16:17.911 "num_base_bdevs": 4, 00:16:17.911 "num_base_bdevs_discovered": 3, 00:16:17.911 "num_base_bdevs_operational": 4, 00:16:17.911 "base_bdevs_list": [ 00:16:17.911 { 00:16:17.911 "name": "BaseBdev1", 00:16:17.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:17.911 "is_configured": false, 00:16:17.911 "data_offset": 0, 00:16:17.911 "data_size": 0 00:16:17.911 }, 00:16:17.911 { 00:16:17.911 "name": "BaseBdev2", 00:16:17.911 "uuid": "c4245544-0812-49f4-8559-a7c26744f828", 00:16:17.911 "is_configured": true, 00:16:17.912 "data_offset": 2048, 00:16:17.912 
"data_size": 63488 00:16:17.912 }, 00:16:17.912 { 00:16:17.912 "name": "BaseBdev3", 00:16:17.912 "uuid": "17200211-34cc-4c41-b704-5636fa5b31a8", 00:16:17.912 "is_configured": true, 00:16:17.912 "data_offset": 2048, 00:16:17.912 "data_size": 63488 00:16:17.912 }, 00:16:17.912 { 00:16:17.912 "name": "BaseBdev4", 00:16:17.912 "uuid": "a701f8f4-4f29-4fad-856a-ade0820edb31", 00:16:17.912 "is_configured": true, 00:16:17.912 "data_offset": 2048, 00:16:17.912 "data_size": 63488 00:16:17.912 } 00:16:17.912 ] 00:16:17.912 }' 00:16:17.912 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.912 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.488 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:18.488 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.488 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.488 [2024-12-12 05:54:25.709849] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:18.488 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.488 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:18.488 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.488 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.488 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.488 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.488 05:54:25 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.488 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.488 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.488 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.488 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.488 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.488 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.488 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.488 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.488 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.488 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.488 "name": "Existed_Raid", 00:16:18.488 "uuid": "49b1149f-f10e-476b-96aa-fefe8e40c6c9", 00:16:18.488 "strip_size_kb": 64, 00:16:18.488 "state": "configuring", 00:16:18.488 "raid_level": "raid5f", 00:16:18.488 "superblock": true, 00:16:18.488 "num_base_bdevs": 4, 00:16:18.488 "num_base_bdevs_discovered": 2, 00:16:18.488 "num_base_bdevs_operational": 4, 00:16:18.488 "base_bdevs_list": [ 00:16:18.488 { 00:16:18.488 "name": "BaseBdev1", 00:16:18.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.488 "is_configured": false, 00:16:18.488 "data_offset": 0, 00:16:18.488 "data_size": 0 00:16:18.488 }, 00:16:18.488 { 00:16:18.488 "name": null, 00:16:18.488 "uuid": "c4245544-0812-49f4-8559-a7c26744f828", 00:16:18.488 
"is_configured": false, 00:16:18.488 "data_offset": 0, 00:16:18.488 "data_size": 63488 00:16:18.488 }, 00:16:18.488 { 00:16:18.488 "name": "BaseBdev3", 00:16:18.488 "uuid": "17200211-34cc-4c41-b704-5636fa5b31a8", 00:16:18.488 "is_configured": true, 00:16:18.488 "data_offset": 2048, 00:16:18.488 "data_size": 63488 00:16:18.488 }, 00:16:18.488 { 00:16:18.488 "name": "BaseBdev4", 00:16:18.488 "uuid": "a701f8f4-4f29-4fad-856a-ade0820edb31", 00:16:18.488 "is_configured": true, 00:16:18.488 "data_offset": 2048, 00:16:18.488 "data_size": 63488 00:16:18.488 } 00:16:18.488 ] 00:16:18.488 }' 00:16:18.488 05:54:25 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.488 05:54:25 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.776 [2024-12-12 05:54:26.248565] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:16:18.776 BaseBdev1 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.776 [ 00:16:18.776 { 00:16:18.776 "name": "BaseBdev1", 00:16:18.776 "aliases": [ 00:16:18.776 "be4e01be-c829-4a5e-89c5-d65fdecde009" 00:16:18.776 ], 00:16:18.776 "product_name": "Malloc disk", 00:16:18.776 "block_size": 512, 00:16:18.776 "num_blocks": 65536, 00:16:18.776 "uuid": "be4e01be-c829-4a5e-89c5-d65fdecde009", 
00:16:18.776 "assigned_rate_limits": { 00:16:18.776 "rw_ios_per_sec": 0, 00:16:18.776 "rw_mbytes_per_sec": 0, 00:16:18.776 "r_mbytes_per_sec": 0, 00:16:18.776 "w_mbytes_per_sec": 0 00:16:18.776 }, 00:16:18.776 "claimed": true, 00:16:18.776 "claim_type": "exclusive_write", 00:16:18.776 "zoned": false, 00:16:18.776 "supported_io_types": { 00:16:18.776 "read": true, 00:16:18.776 "write": true, 00:16:18.776 "unmap": true, 00:16:18.776 "flush": true, 00:16:18.776 "reset": true, 00:16:18.776 "nvme_admin": false, 00:16:18.776 "nvme_io": false, 00:16:18.776 "nvme_io_md": false, 00:16:18.776 "write_zeroes": true, 00:16:18.776 "zcopy": true, 00:16:18.776 "get_zone_info": false, 00:16:18.776 "zone_management": false, 00:16:18.776 "zone_append": false, 00:16:18.776 "compare": false, 00:16:18.776 "compare_and_write": false, 00:16:18.776 "abort": true, 00:16:18.776 "seek_hole": false, 00:16:18.776 "seek_data": false, 00:16:18.776 "copy": true, 00:16:18.776 "nvme_iov_md": false 00:16:18.776 }, 00:16:18.776 "memory_domains": [ 00:16:18.776 { 00:16:18.776 "dma_device_id": "system", 00:16:18.776 "dma_device_type": 1 00:16:18.776 }, 00:16:18.776 { 00:16:18.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.776 "dma_device_type": 2 00:16:18.776 } 00:16:18.776 ], 00:16:18.776 "driver_specific": {} 00:16:18.776 } 00:16:18.776 ] 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:18.776 05:54:26 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.776 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.036 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.036 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.036 "name": "Existed_Raid", 00:16:19.036 "uuid": "49b1149f-f10e-476b-96aa-fefe8e40c6c9", 00:16:19.036 "strip_size_kb": 64, 00:16:19.036 "state": "configuring", 00:16:19.036 "raid_level": "raid5f", 00:16:19.036 "superblock": true, 00:16:19.036 "num_base_bdevs": 4, 00:16:19.036 "num_base_bdevs_discovered": 3, 00:16:19.036 "num_base_bdevs_operational": 4, 00:16:19.036 "base_bdevs_list": [ 00:16:19.036 { 00:16:19.036 "name": "BaseBdev1", 00:16:19.036 "uuid": "be4e01be-c829-4a5e-89c5-d65fdecde009", 
00:16:19.036 "is_configured": true, 00:16:19.036 "data_offset": 2048, 00:16:19.036 "data_size": 63488 00:16:19.036 }, 00:16:19.036 { 00:16:19.036 "name": null, 00:16:19.036 "uuid": "c4245544-0812-49f4-8559-a7c26744f828", 00:16:19.036 "is_configured": false, 00:16:19.036 "data_offset": 0, 00:16:19.036 "data_size": 63488 00:16:19.036 }, 00:16:19.036 { 00:16:19.036 "name": "BaseBdev3", 00:16:19.036 "uuid": "17200211-34cc-4c41-b704-5636fa5b31a8", 00:16:19.036 "is_configured": true, 00:16:19.036 "data_offset": 2048, 00:16:19.036 "data_size": 63488 00:16:19.036 }, 00:16:19.036 { 00:16:19.036 "name": "BaseBdev4", 00:16:19.036 "uuid": "a701f8f4-4f29-4fad-856a-ade0820edb31", 00:16:19.036 "is_configured": true, 00:16:19.036 "data_offset": 2048, 00:16:19.036 "data_size": 63488 00:16:19.036 } 00:16:19.036 ] 00:16:19.036 }' 00:16:19.036 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.037 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.296 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.296 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.296 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:19.296 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.296 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.296 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:19.296 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:19.296 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:19.296 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.296 [2024-12-12 05:54:26.731799] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:19.296 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.296 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:19.296 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.296 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.296 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.296 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.296 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.296 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:19.297 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.297 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.297 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.297 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.297 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.297 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.297 05:54:26 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:19.297 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.297 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.297 "name": "Existed_Raid", 00:16:19.297 "uuid": "49b1149f-f10e-476b-96aa-fefe8e40c6c9", 00:16:19.297 "strip_size_kb": 64, 00:16:19.297 "state": "configuring", 00:16:19.297 "raid_level": "raid5f", 00:16:19.297 "superblock": true, 00:16:19.297 "num_base_bdevs": 4, 00:16:19.297 "num_base_bdevs_discovered": 2, 00:16:19.297 "num_base_bdevs_operational": 4, 00:16:19.297 "base_bdevs_list": [ 00:16:19.297 { 00:16:19.297 "name": "BaseBdev1", 00:16:19.297 "uuid": "be4e01be-c829-4a5e-89c5-d65fdecde009", 00:16:19.297 "is_configured": true, 00:16:19.297 "data_offset": 2048, 00:16:19.297 "data_size": 63488 00:16:19.297 }, 00:16:19.297 { 00:16:19.297 "name": null, 00:16:19.297 "uuid": "c4245544-0812-49f4-8559-a7c26744f828", 00:16:19.297 "is_configured": false, 00:16:19.297 "data_offset": 0, 00:16:19.297 "data_size": 63488 00:16:19.297 }, 00:16:19.297 { 00:16:19.297 "name": null, 00:16:19.297 "uuid": "17200211-34cc-4c41-b704-5636fa5b31a8", 00:16:19.297 "is_configured": false, 00:16:19.297 "data_offset": 0, 00:16:19.297 "data_size": 63488 00:16:19.297 }, 00:16:19.297 { 00:16:19.297 "name": "BaseBdev4", 00:16:19.297 "uuid": "a701f8f4-4f29-4fad-856a-ade0820edb31", 00:16:19.297 "is_configured": true, 00:16:19.297 "data_offset": 2048, 00:16:19.297 "data_size": 63488 00:16:19.297 } 00:16:19.297 ] 00:16:19.297 }' 00:16:19.297 05:54:26 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.297 05:54:26 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.866 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:19.866 05:54:27 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.866 05:54:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.866 05:54:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.867 [2024-12-12 05:54:27.175043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:19.867 "name": "Existed_Raid", 00:16:19.867 "uuid": "49b1149f-f10e-476b-96aa-fefe8e40c6c9", 00:16:19.867 "strip_size_kb": 64, 00:16:19.867 "state": "configuring", 00:16:19.867 "raid_level": "raid5f", 00:16:19.867 "superblock": true, 00:16:19.867 "num_base_bdevs": 4, 00:16:19.867 "num_base_bdevs_discovered": 3, 00:16:19.867 "num_base_bdevs_operational": 4, 00:16:19.867 "base_bdevs_list": [ 00:16:19.867 { 00:16:19.867 "name": "BaseBdev1", 00:16:19.867 "uuid": "be4e01be-c829-4a5e-89c5-d65fdecde009", 00:16:19.867 "is_configured": true, 00:16:19.867 "data_offset": 2048, 00:16:19.867 "data_size": 63488 00:16:19.867 }, 00:16:19.867 { 00:16:19.867 "name": null, 00:16:19.867 "uuid": "c4245544-0812-49f4-8559-a7c26744f828", 00:16:19.867 "is_configured": false, 00:16:19.867 "data_offset": 0, 00:16:19.867 "data_size": 63488 00:16:19.867 }, 00:16:19.867 { 00:16:19.867 "name": "BaseBdev3", 00:16:19.867 "uuid": "17200211-34cc-4c41-b704-5636fa5b31a8", 
00:16:19.867 "is_configured": true, 00:16:19.867 "data_offset": 2048, 00:16:19.867 "data_size": 63488 00:16:19.867 }, 00:16:19.867 { 00:16:19.867 "name": "BaseBdev4", 00:16:19.867 "uuid": "a701f8f4-4f29-4fad-856a-ade0820edb31", 00:16:19.867 "is_configured": true, 00:16:19.867 "data_offset": 2048, 00:16:19.867 "data_size": 63488 00:16:19.867 } 00:16:19.867 ] 00:16:19.867 }' 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:19.867 05:54:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.127 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.127 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:20.127 05:54:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.127 05:54:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.127 05:54:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.127 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:20.127 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:20.127 05:54:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.127 05:54:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.127 [2024-12-12 05:54:27.622643] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:20.387 05:54:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.387 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:16:20.387 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.387 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.387 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.387 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.387 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.387 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.387 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.387 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.387 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.387 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.387 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.387 05:54:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.387 05:54:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.387 05:54:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.387 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.387 "name": "Existed_Raid", 00:16:20.387 "uuid": "49b1149f-f10e-476b-96aa-fefe8e40c6c9", 00:16:20.387 "strip_size_kb": 64, 00:16:20.387 "state": "configuring", 00:16:20.387 "raid_level": "raid5f", 
00:16:20.387 "superblock": true, 00:16:20.387 "num_base_bdevs": 4, 00:16:20.387 "num_base_bdevs_discovered": 2, 00:16:20.387 "num_base_bdevs_operational": 4, 00:16:20.387 "base_bdevs_list": [ 00:16:20.387 { 00:16:20.387 "name": null, 00:16:20.387 "uuid": "be4e01be-c829-4a5e-89c5-d65fdecde009", 00:16:20.387 "is_configured": false, 00:16:20.387 "data_offset": 0, 00:16:20.387 "data_size": 63488 00:16:20.387 }, 00:16:20.387 { 00:16:20.387 "name": null, 00:16:20.387 "uuid": "c4245544-0812-49f4-8559-a7c26744f828", 00:16:20.387 "is_configured": false, 00:16:20.387 "data_offset": 0, 00:16:20.387 "data_size": 63488 00:16:20.387 }, 00:16:20.387 { 00:16:20.387 "name": "BaseBdev3", 00:16:20.387 "uuid": "17200211-34cc-4c41-b704-5636fa5b31a8", 00:16:20.387 "is_configured": true, 00:16:20.387 "data_offset": 2048, 00:16:20.387 "data_size": 63488 00:16:20.387 }, 00:16:20.387 { 00:16:20.387 "name": "BaseBdev4", 00:16:20.387 "uuid": "a701f8f4-4f29-4fad-856a-ade0820edb31", 00:16:20.387 "is_configured": true, 00:16:20.387 "data_offset": 2048, 00:16:20.387 "data_size": 63488 00:16:20.387 } 00:16:20.387 ] 00:16:20.387 }' 00:16:20.387 05:54:27 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.387 05:54:27 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.648 [2024-12-12 05:54:28.153008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.648 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.908 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.908 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.908 "name": "Existed_Raid", 00:16:20.908 "uuid": "49b1149f-f10e-476b-96aa-fefe8e40c6c9", 00:16:20.908 "strip_size_kb": 64, 00:16:20.908 "state": "configuring", 00:16:20.908 "raid_level": "raid5f", 00:16:20.908 "superblock": true, 00:16:20.908 "num_base_bdevs": 4, 00:16:20.908 "num_base_bdevs_discovered": 3, 00:16:20.908 "num_base_bdevs_operational": 4, 00:16:20.908 "base_bdevs_list": [ 00:16:20.908 { 00:16:20.908 "name": null, 00:16:20.908 "uuid": "be4e01be-c829-4a5e-89c5-d65fdecde009", 00:16:20.908 "is_configured": false, 00:16:20.908 "data_offset": 0, 00:16:20.908 "data_size": 63488 00:16:20.908 }, 00:16:20.908 { 00:16:20.908 "name": "BaseBdev2", 00:16:20.908 "uuid": "c4245544-0812-49f4-8559-a7c26744f828", 00:16:20.908 "is_configured": true, 00:16:20.908 "data_offset": 2048, 00:16:20.908 "data_size": 63488 00:16:20.908 }, 00:16:20.908 { 00:16:20.908 "name": "BaseBdev3", 00:16:20.908 "uuid": "17200211-34cc-4c41-b704-5636fa5b31a8", 00:16:20.908 "is_configured": true, 00:16:20.908 "data_offset": 2048, 00:16:20.908 "data_size": 63488 00:16:20.908 }, 00:16:20.908 { 00:16:20.908 "name": "BaseBdev4", 00:16:20.908 "uuid": "a701f8f4-4f29-4fad-856a-ade0820edb31", 00:16:20.908 "is_configured": true, 00:16:20.908 "data_offset": 2048, 00:16:20.908 "data_size": 63488 00:16:20.908 } 00:16:20.908 ] 00:16:20.908 }' 00:16:20.908 05:54:28 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.908 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u be4e01be-c829-4a5e-89c5-d65fdecde009 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.169 [2024-12-12 05:54:28.663819] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:21.169 [2024-12-12 
05:54:28.664070] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:21.169 [2024-12-12 05:54:28.664083] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:21.169 [2024-12-12 05:54:28.664364] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:21.169 NewBaseBdev 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.169 [2024-12-12 05:54:28.671492] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:21.169 [2024-12-12 05:54:28.671531] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:21.169 [2024-12-12 05:54:28.671774] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.169 05:54:28 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.169 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.429 [ 00:16:21.429 { 00:16:21.429 "name": "NewBaseBdev", 00:16:21.429 "aliases": [ 00:16:21.429 "be4e01be-c829-4a5e-89c5-d65fdecde009" 00:16:21.429 ], 00:16:21.429 "product_name": "Malloc disk", 00:16:21.429 "block_size": 512, 00:16:21.429 "num_blocks": 65536, 00:16:21.429 "uuid": "be4e01be-c829-4a5e-89c5-d65fdecde009", 00:16:21.429 "assigned_rate_limits": { 00:16:21.429 "rw_ios_per_sec": 0, 00:16:21.429 "rw_mbytes_per_sec": 0, 00:16:21.429 "r_mbytes_per_sec": 0, 00:16:21.429 "w_mbytes_per_sec": 0 00:16:21.429 }, 00:16:21.429 "claimed": true, 00:16:21.429 "claim_type": "exclusive_write", 00:16:21.429 "zoned": false, 00:16:21.429 "supported_io_types": { 00:16:21.429 "read": true, 00:16:21.429 "write": true, 00:16:21.429 "unmap": true, 00:16:21.429 "flush": true, 00:16:21.429 "reset": true, 00:16:21.429 "nvme_admin": false, 00:16:21.429 "nvme_io": false, 00:16:21.429 "nvme_io_md": false, 00:16:21.429 "write_zeroes": true, 00:16:21.429 "zcopy": true, 00:16:21.429 "get_zone_info": false, 00:16:21.429 "zone_management": false, 00:16:21.429 "zone_append": false, 00:16:21.429 "compare": false, 00:16:21.429 "compare_and_write": false, 00:16:21.429 "abort": true, 00:16:21.429 "seek_hole": false, 00:16:21.429 "seek_data": false, 00:16:21.429 "copy": true, 00:16:21.429 "nvme_iov_md": false 00:16:21.429 }, 00:16:21.429 "memory_domains": [ 00:16:21.429 { 00:16:21.429 "dma_device_id": "system", 00:16:21.429 "dma_device_type": 1 00:16:21.429 }, 00:16:21.429 { 00:16:21.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:16:21.429 "dma_device_type": 2 00:16:21.429 } 00:16:21.429 ], 00:16:21.429 "driver_specific": {} 00:16:21.429 } 00:16:21.429 ] 00:16:21.429 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.429 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:16:21.430 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:21.430 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:21.430 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.430 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.430 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.430 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:21.430 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.430 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.430 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.430 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.430 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.430 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.430 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:21.430 05:54:28 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:21.430 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.430 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.430 "name": "Existed_Raid", 00:16:21.430 "uuid": "49b1149f-f10e-476b-96aa-fefe8e40c6c9", 00:16:21.430 "strip_size_kb": 64, 00:16:21.430 "state": "online", 00:16:21.430 "raid_level": "raid5f", 00:16:21.430 "superblock": true, 00:16:21.430 "num_base_bdevs": 4, 00:16:21.430 "num_base_bdevs_discovered": 4, 00:16:21.430 "num_base_bdevs_operational": 4, 00:16:21.430 "base_bdevs_list": [ 00:16:21.430 { 00:16:21.430 "name": "NewBaseBdev", 00:16:21.430 "uuid": "be4e01be-c829-4a5e-89c5-d65fdecde009", 00:16:21.430 "is_configured": true, 00:16:21.430 "data_offset": 2048, 00:16:21.430 "data_size": 63488 00:16:21.430 }, 00:16:21.430 { 00:16:21.430 "name": "BaseBdev2", 00:16:21.430 "uuid": "c4245544-0812-49f4-8559-a7c26744f828", 00:16:21.430 "is_configured": true, 00:16:21.430 "data_offset": 2048, 00:16:21.430 "data_size": 63488 00:16:21.430 }, 00:16:21.430 { 00:16:21.430 "name": "BaseBdev3", 00:16:21.430 "uuid": "17200211-34cc-4c41-b704-5636fa5b31a8", 00:16:21.430 "is_configured": true, 00:16:21.430 "data_offset": 2048, 00:16:21.430 "data_size": 63488 00:16:21.430 }, 00:16:21.430 { 00:16:21.430 "name": "BaseBdev4", 00:16:21.430 "uuid": "a701f8f4-4f29-4fad-856a-ade0820edb31", 00:16:21.430 "is_configured": true, 00:16:21.430 "data_offset": 2048, 00:16:21.430 "data_size": 63488 00:16:21.430 } 00:16:21.430 ] 00:16:21.430 }' 00:16:21.430 05:54:28 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.430 05:54:28 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.690 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:21.690 05:54:29 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:21.690 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:21.690 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:21.690 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:16:21.690 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:21.690 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:21.690 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:21.690 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.690 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.690 [2024-12-12 05:54:29.127091] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:21.690 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.690 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:21.690 "name": "Existed_Raid", 00:16:21.690 "aliases": [ 00:16:21.690 "49b1149f-f10e-476b-96aa-fefe8e40c6c9" 00:16:21.690 ], 00:16:21.690 "product_name": "Raid Volume", 00:16:21.690 "block_size": 512, 00:16:21.690 "num_blocks": 190464, 00:16:21.690 "uuid": "49b1149f-f10e-476b-96aa-fefe8e40c6c9", 00:16:21.690 "assigned_rate_limits": { 00:16:21.690 "rw_ios_per_sec": 0, 00:16:21.690 "rw_mbytes_per_sec": 0, 00:16:21.690 "r_mbytes_per_sec": 0, 00:16:21.690 "w_mbytes_per_sec": 0 00:16:21.690 }, 00:16:21.690 "claimed": false, 00:16:21.690 "zoned": false, 00:16:21.690 "supported_io_types": { 00:16:21.690 "read": true, 00:16:21.690 
"write": true, 00:16:21.690 "unmap": false, 00:16:21.690 "flush": false, 00:16:21.690 "reset": true, 00:16:21.690 "nvme_admin": false, 00:16:21.690 "nvme_io": false, 00:16:21.690 "nvme_io_md": false, 00:16:21.690 "write_zeroes": true, 00:16:21.690 "zcopy": false, 00:16:21.690 "get_zone_info": false, 00:16:21.690 "zone_management": false, 00:16:21.690 "zone_append": false, 00:16:21.690 "compare": false, 00:16:21.690 "compare_and_write": false, 00:16:21.690 "abort": false, 00:16:21.690 "seek_hole": false, 00:16:21.690 "seek_data": false, 00:16:21.690 "copy": false, 00:16:21.690 "nvme_iov_md": false 00:16:21.690 }, 00:16:21.690 "driver_specific": { 00:16:21.690 "raid": { 00:16:21.690 "uuid": "49b1149f-f10e-476b-96aa-fefe8e40c6c9", 00:16:21.691 "strip_size_kb": 64, 00:16:21.691 "state": "online", 00:16:21.691 "raid_level": "raid5f", 00:16:21.691 "superblock": true, 00:16:21.691 "num_base_bdevs": 4, 00:16:21.691 "num_base_bdevs_discovered": 4, 00:16:21.691 "num_base_bdevs_operational": 4, 00:16:21.691 "base_bdevs_list": [ 00:16:21.691 { 00:16:21.691 "name": "NewBaseBdev", 00:16:21.691 "uuid": "be4e01be-c829-4a5e-89c5-d65fdecde009", 00:16:21.691 "is_configured": true, 00:16:21.691 "data_offset": 2048, 00:16:21.691 "data_size": 63488 00:16:21.691 }, 00:16:21.691 { 00:16:21.691 "name": "BaseBdev2", 00:16:21.691 "uuid": "c4245544-0812-49f4-8559-a7c26744f828", 00:16:21.691 "is_configured": true, 00:16:21.691 "data_offset": 2048, 00:16:21.691 "data_size": 63488 00:16:21.691 }, 00:16:21.691 { 00:16:21.691 "name": "BaseBdev3", 00:16:21.691 "uuid": "17200211-34cc-4c41-b704-5636fa5b31a8", 00:16:21.691 "is_configured": true, 00:16:21.691 "data_offset": 2048, 00:16:21.691 "data_size": 63488 00:16:21.691 }, 00:16:21.691 { 00:16:21.691 "name": "BaseBdev4", 00:16:21.691 "uuid": "a701f8f4-4f29-4fad-856a-ade0820edb31", 00:16:21.691 "is_configured": true, 00:16:21.691 "data_offset": 2048, 00:16:21.691 "data_size": 63488 00:16:21.691 } 00:16:21.691 ] 00:16:21.691 } 00:16:21.691 } 
00:16:21.691 }' 00:16:21.691 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:21.691 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:21.691 BaseBdev2 00:16:21.691 BaseBdev3 00:16:21.691 BaseBdev4' 00:16:21.691 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.951 
05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:21.951 [2024-12-12 05:54:29.446558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:21.951 [2024-12-12 05:54:29.446591] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.951 [2024-12-12 05:54:29.446660] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.951 [2024-12-12 05:54:29.446945] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.951 [2024-12-12 05:54:29.446974] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83359 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83359 ']' 00:16:21.951 05:54:29 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83359 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:21.951 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83359 00:16:22.211 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:22.211 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:22.211 killing process with pid 83359 00:16:22.212 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83359' 00:16:22.212 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83359 00:16:22.212 [2024-12-12 05:54:29.493101] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:22.212 05:54:29 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83359 00:16:22.472 [2024-12-12 05:54:29.866785] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:23.410 05:54:30 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:23.410 00:16:23.410 real 0m10.869s 00:16:23.410 user 0m17.208s 00:16:23.410 sys 0m1.992s 00:16:23.410 05:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.410 05:54:30 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:23.410 ************************************ 00:16:23.410 END TEST raid5f_state_function_test_sb 00:16:23.410 ************************************ 00:16:23.670 05:54:30 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 4 00:16:23.670 05:54:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:23.670 05:54:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.670 05:54:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:23.670 ************************************ 00:16:23.670 START TEST raid5f_superblock_test 00:16:23.670 ************************************ 00:16:23.670 05:54:30 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:16:23.670 05:54:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:23.670 05:54:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:16:23.670 05:54:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:23.670 05:54:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:23.670 05:54:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:23.670 05:54:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:23.670 05:54:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:23.670 05:54:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:23.670 05:54:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:23.670 05:54:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:23.670 05:54:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:23.670 05:54:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:23.670 05:54:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:23.670 05:54:30 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:23.670 05:54:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:23.670 05:54:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:23.670 05:54:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=83957 00:16:23.670 05:54:30 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:23.670 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 83957 00:16:23.670 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 83957 ']' 00:16:23.670 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.670 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.670 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.670 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.670 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.670 [2024-12-12 05:54:31.083461] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
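At this point the harness has started `bdev_svc` and `waitforlisten` blocks until the target is listening on the RPC UNIX domain socket (`/var/tmp/spdk.sock`) before any `rpc_cmd` calls are issued. A minimal, self-contained sketch of that polling idea follows; the socket path, retry budget, and helper names here are illustrative stand-ins, not the real `autotest_common.sh` implementation:

```shell
#!/usr/bin/env bash
# Sketch: poll until a UNIX socket path appears, or give up after a retry
# budget. The real waitforlisten also checks the pid; this only shows the
# socket-polling core. Path and budget below are illustrative.
sock="/tmp/demo_rpc.sock"
max_retries=100
rm -f "$sock"

wait_for_sock() {
  local i=0
  while [ "$i" -lt "$max_retries" ]; do
    [ -S "$sock" ] && return 0   # -S: path exists and is a socket
    sleep 0.1
    i=$((i + 1))
  done
  return 1
}

# Demo stand-in for the target process: create the socket after a short delay.
( sleep 0.3
  python3 -c 'import socket,sys; socket.socket(socket.AF_UNIX).bind(sys.argv[1])' "$sock"
) &
wait_for_sock && echo "socket ready: $sock"
wait
```

The polling loop bounds total wait time (here roughly 10 s) so a target that fails to start turns into a test failure rather than a hang.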
00:16:23.670 [2024-12-12 05:54:31.083595] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83957 ] 00:16:23.930 [2024-12-12 05:54:31.253021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:23.930 [2024-12-12 05:54:31.349707] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.189 [2024-12-12 05:54:31.544972] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.189 [2024-12-12 05:54:31.545030] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.449 malloc1 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.449 [2024-12-12 05:54:31.935068] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:24.449 [2024-12-12 05:54:31.935123] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.449 [2024-12-12 05:54:31.935143] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:24.449 [2024-12-12 05:54:31.935151] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.449 [2024-12-12 05:54:31.937127] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.449 [2024-12-12 05:54:31.937161] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:24.449 pt1 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
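The trace above is the superblock test's setup loop: for each of the four base devices it records a malloc bdev name, a passthru (`pt`) name, and a fixed UUID into parallel arrays before creating the bdevs over RPC. A self-contained sketch of that array-building pattern (names and UUIDs mirror the trace; the actual `rpc_cmd` calls are shown only as comments, since no SPDK target is driven here):

```shell
#!/usr/bin/env bash
# Sketch of the base-bdev bookkeeping loop from bdev_raid.sh@416-425:
# build parallel arrays of malloc names, passthru names, and fixed UUIDs.
num_base_bdevs=4
base_bdevs_malloc=()
base_bdevs_pt=()
base_bdevs_pt_uuid=()

for ((i = 1; i <= num_base_bdevs; i++)); do
  base_bdevs_malloc+=("malloc$i")
  base_bdevs_pt+=("pt$i")
  # UUIDs in the trace are 00000000-0000-0000-0000-00000000000N.
  base_bdevs_pt_uuid+=("$(printf '00000000-0000-0000-0000-%012d' "$i")")
  # In the real test each iteration then runs, per the trace:
  #   rpc_cmd bdev_malloc_create 32 512 -b "malloc$i"
  #   rpc_cmd bdev_passthru_create -b "malloc$i" -p "pt$i" -u <uuid>
done

echo "${base_bdevs_pt[@]}"
```

Keeping three parallel arrays lets later steps in the test address the same base device by malloc name, passthru name, or UUID interchangeably.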
00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.449 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.709 malloc2 00:16:24.709 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.709 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:24.709 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.709 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.709 [2024-12-12 05:54:31.988314] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:24.709 [2024-12-12 05:54:31.988377] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.709 [2024-12-12 05:54:31.988398] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:24.709 [2024-12-12 05:54:31.988406] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.709 [2024-12-12 05:54:31.990333] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.709 [2024-12-12 05:54:31.990366] vbdev_passthru.c: 
711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:24.709 pt2 00:16:24.709 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.709 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:24.709 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:24.709 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:24.709 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:24.709 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:24.709 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:24.709 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:24.709 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:24.709 05:54:31 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:24.709 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.709 05:54:31 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.709 malloc3 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.709 [2024-12-12 05:54:32.071983] 
vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:24.709 [2024-12-12 05:54:32.072030] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.709 [2024-12-12 05:54:32.072049] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:24.709 [2024-12-12 05:54:32.072057] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.709 [2024-12-12 05:54:32.074119] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.709 [2024-12-12 05:54:32.074152] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:24.709 pt3 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.709 05:54:32 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.709 malloc4 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.709 [2024-12-12 05:54:32.124291] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:24.709 [2024-12-12 05:54:32.124342] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:24.709 [2024-12-12 05:54:32.124360] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:24.709 [2024-12-12 05:54:32.124369] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:24.709 [2024-12-12 05:54:32.126565] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:24.709 [2024-12-12 05:54:32.126598] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:24.709 pt4 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:24.709 [2024-12-12 05:54:32.136300] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:24.709 [2024-12-12 05:54:32.138051] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:24.709 [2024-12-12 05:54:32.138138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:24.709 [2024-12-12 05:54:32.138185] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:24.709 [2024-12-12 05:54:32.138463] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:24.709 [2024-12-12 05:54:32.138508] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:24.709 [2024-12-12 05:54:32.138763] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:24.709 [2024-12-12 05:54:32.145421] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:24.709 [2024-12-12 05:54:32.145450] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:24.709 [2024-12-12 05:54:32.145637] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:24.709 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:24.710 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:24.710 
05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:24.710 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:24.710 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:24.710 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:24.710 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:24.710 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:24.710 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.710 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.710 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.710 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.710 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:24.710 "name": "raid_bdev1", 00:16:24.710 "uuid": "de30dc11-0166-457d-a0da-a12fdcc7522d", 00:16:24.710 "strip_size_kb": 64, 00:16:24.710 "state": "online", 00:16:24.710 "raid_level": "raid5f", 00:16:24.710 "superblock": true, 00:16:24.710 "num_base_bdevs": 4, 00:16:24.710 "num_base_bdevs_discovered": 4, 00:16:24.710 "num_base_bdevs_operational": 4, 00:16:24.710 "base_bdevs_list": [ 00:16:24.710 { 00:16:24.710 "name": "pt1", 00:16:24.710 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:24.710 "is_configured": true, 00:16:24.710 "data_offset": 2048, 00:16:24.710 "data_size": 63488 00:16:24.710 }, 00:16:24.710 { 00:16:24.710 "name": "pt2", 00:16:24.710 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:24.710 "is_configured": true, 00:16:24.710 "data_offset": 2048, 00:16:24.710 
"data_size": 63488 00:16:24.710 }, 00:16:24.710 { 00:16:24.710 "name": "pt3", 00:16:24.710 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:24.710 "is_configured": true, 00:16:24.710 "data_offset": 2048, 00:16:24.710 "data_size": 63488 00:16:24.710 }, 00:16:24.710 { 00:16:24.710 "name": "pt4", 00:16:24.710 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:24.710 "is_configured": true, 00:16:24.710 "data_offset": 2048, 00:16:24.710 "data_size": 63488 00:16:24.710 } 00:16:24.710 ] 00:16:24.710 }' 00:16:24.710 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:24.710 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.278 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:25.278 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:25.278 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:25.278 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:25.278 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:25.278 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:25.278 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:25.278 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.278 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:25.278 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.278 [2024-12-12 05:54:32.589613] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.278 05:54:32 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.278 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:25.278 "name": "raid_bdev1", 00:16:25.278 "aliases": [ 00:16:25.278 "de30dc11-0166-457d-a0da-a12fdcc7522d" 00:16:25.278 ], 00:16:25.278 "product_name": "Raid Volume", 00:16:25.278 "block_size": 512, 00:16:25.278 "num_blocks": 190464, 00:16:25.278 "uuid": "de30dc11-0166-457d-a0da-a12fdcc7522d", 00:16:25.278 "assigned_rate_limits": { 00:16:25.278 "rw_ios_per_sec": 0, 00:16:25.278 "rw_mbytes_per_sec": 0, 00:16:25.278 "r_mbytes_per_sec": 0, 00:16:25.278 "w_mbytes_per_sec": 0 00:16:25.278 }, 00:16:25.278 "claimed": false, 00:16:25.278 "zoned": false, 00:16:25.279 "supported_io_types": { 00:16:25.279 "read": true, 00:16:25.279 "write": true, 00:16:25.279 "unmap": false, 00:16:25.279 "flush": false, 00:16:25.279 "reset": true, 00:16:25.279 "nvme_admin": false, 00:16:25.279 "nvme_io": false, 00:16:25.279 "nvme_io_md": false, 00:16:25.279 "write_zeroes": true, 00:16:25.279 "zcopy": false, 00:16:25.279 "get_zone_info": false, 00:16:25.279 "zone_management": false, 00:16:25.279 "zone_append": false, 00:16:25.279 "compare": false, 00:16:25.279 "compare_and_write": false, 00:16:25.279 "abort": false, 00:16:25.279 "seek_hole": false, 00:16:25.279 "seek_data": false, 00:16:25.279 "copy": false, 00:16:25.279 "nvme_iov_md": false 00:16:25.279 }, 00:16:25.279 "driver_specific": { 00:16:25.279 "raid": { 00:16:25.279 "uuid": "de30dc11-0166-457d-a0da-a12fdcc7522d", 00:16:25.279 "strip_size_kb": 64, 00:16:25.279 "state": "online", 00:16:25.279 "raid_level": "raid5f", 00:16:25.279 "superblock": true, 00:16:25.279 "num_base_bdevs": 4, 00:16:25.279 "num_base_bdevs_discovered": 4, 00:16:25.279 "num_base_bdevs_operational": 4, 00:16:25.279 "base_bdevs_list": [ 00:16:25.279 { 00:16:25.279 "name": "pt1", 00:16:25.279 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:25.279 "is_configured": true, 00:16:25.279 "data_offset": 2048, 
00:16:25.279 "data_size": 63488 00:16:25.279 }, 00:16:25.279 { 00:16:25.279 "name": "pt2", 00:16:25.279 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:25.279 "is_configured": true, 00:16:25.279 "data_offset": 2048, 00:16:25.279 "data_size": 63488 00:16:25.279 }, 00:16:25.279 { 00:16:25.279 "name": "pt3", 00:16:25.279 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:25.279 "is_configured": true, 00:16:25.279 "data_offset": 2048, 00:16:25.279 "data_size": 63488 00:16:25.279 }, 00:16:25.279 { 00:16:25.279 "name": "pt4", 00:16:25.279 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:25.279 "is_configured": true, 00:16:25.279 "data_offset": 2048, 00:16:25.279 "data_size": 63488 00:16:25.279 } 00:16:25.279 ] 00:16:25.279 } 00:16:25.279 } 00:16:25.279 }' 00:16:25.279 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:25.279 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:25.279 pt2 00:16:25.279 pt3 00:16:25.279 pt4' 00:16:25.279 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.279 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:25.279 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.279 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:25.279 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.279 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.279 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.279 05:54:32 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.279 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.279 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.279 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.279 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:25.279 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.279 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.279 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.279 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:25.539 [2024-12-12 05:54:32.889040] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=de30dc11-0166-457d-a0da-a12fdcc7522d 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
de30dc11-0166-457d-a0da-a12fdcc7522d ']' 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.539 [2024-12-12 05:54:32.940789] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:25.539 [2024-12-12 05:54:32.940817] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:25.539 [2024-12-12 05:54:32.940885] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:25.539 [2024-12-12 05:54:32.940965] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:25.539 [2024-12-12 05:54:32.940983] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:25.539 
05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.539 05:54:32 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.539 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.539 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:25.539 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:25.539 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.539 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.539 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.539 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:25.539 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:16:25.539 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.539 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.539 05:54:33 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.539 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:25.539 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:25.539 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.539 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.800 [2024-12-12 05:54:33.084594] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:25.800 [2024-12-12 05:54:33.086326] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:25.800 [2024-12-12 05:54:33.086374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:25.800 [2024-12-12 05:54:33.086406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:25.800 [2024-12-12 05:54:33.086450] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:25.800 [2024-12-12 05:54:33.086496] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:25.800 [2024-12-12 05:54:33.086525] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:25.800 [2024-12-12 05:54:33.086542] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:25.800 [2024-12-12 05:54:33.086556] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:25.800 [2024-12-12 05:54:33.086566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:25.800 request: 00:16:25.800 { 00:16:25.800 "name": "raid_bdev1", 00:16:25.800 "raid_level": "raid5f", 00:16:25.800 "base_bdevs": [ 00:16:25.800 "malloc1", 00:16:25.800 "malloc2", 00:16:25.800 "malloc3", 00:16:25.800 "malloc4" 00:16:25.800 ], 00:16:25.800 "strip_size_kb": 64, 00:16:25.800 "superblock": false, 00:16:25.800 "method": "bdev_raid_create", 00:16:25.800 "req_id": 1 00:16:25.800 } 00:16:25.800 Got JSON-RPC error response 
00:16:25.800 response: 00:16:25.800 { 00:16:25.800 "code": -17, 00:16:25.800 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:25.800 } 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.800 [2024-12-12 05:54:33.148443] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:25.800 [2024-12-12 05:54:33.148489] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:16:25.800 [2024-12-12 05:54:33.148513] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:25.800 [2024-12-12 05:54:33.148524] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.800 [2024-12-12 05:54:33.150536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.800 [2024-12-12 05:54:33.150572] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:25.800 [2024-12-12 05:54:33.150635] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:25.800 [2024-12-12 05:54:33.150691] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:25.800 pt1 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:25.800 "name": "raid_bdev1", 00:16:25.800 "uuid": "de30dc11-0166-457d-a0da-a12fdcc7522d", 00:16:25.800 "strip_size_kb": 64, 00:16:25.800 "state": "configuring", 00:16:25.800 "raid_level": "raid5f", 00:16:25.800 "superblock": true, 00:16:25.800 "num_base_bdevs": 4, 00:16:25.800 "num_base_bdevs_discovered": 1, 00:16:25.800 "num_base_bdevs_operational": 4, 00:16:25.800 "base_bdevs_list": [ 00:16:25.800 { 00:16:25.800 "name": "pt1", 00:16:25.800 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:25.800 "is_configured": true, 00:16:25.800 "data_offset": 2048, 00:16:25.800 "data_size": 63488 00:16:25.800 }, 00:16:25.800 { 00:16:25.800 "name": null, 00:16:25.800 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:25.800 "is_configured": false, 00:16:25.800 "data_offset": 2048, 00:16:25.800 "data_size": 63488 00:16:25.800 }, 00:16:25.800 { 00:16:25.800 "name": null, 00:16:25.800 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:25.800 "is_configured": false, 00:16:25.800 "data_offset": 2048, 00:16:25.800 "data_size": 63488 00:16:25.800 }, 00:16:25.800 { 00:16:25.800 "name": null, 00:16:25.800 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:25.800 "is_configured": false, 00:16:25.800 "data_offset": 2048, 00:16:25.800 "data_size": 63488 00:16:25.800 } 00:16:25.800 ] 00:16:25.800 }' 
00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:25.800 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.371 [2024-12-12 05:54:33.607699] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:26.371 [2024-12-12 05:54:33.607763] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.371 [2024-12-12 05:54:33.607780] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:26.371 [2024-12-12 05:54:33.607790] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.371 [2024-12-12 05:54:33.608197] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.371 [2024-12-12 05:54:33.608217] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:26.371 [2024-12-12 05:54:33.608292] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:26.371 [2024-12-12 05:54:33.608314] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:26.371 pt2 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.371 [2024-12-12 05:54:33.619690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.371 "name": "raid_bdev1", 00:16:26.371 "uuid": "de30dc11-0166-457d-a0da-a12fdcc7522d", 00:16:26.371 "strip_size_kb": 64, 00:16:26.371 "state": "configuring", 00:16:26.371 "raid_level": "raid5f", 00:16:26.371 "superblock": true, 00:16:26.371 "num_base_bdevs": 4, 00:16:26.371 "num_base_bdevs_discovered": 1, 00:16:26.371 "num_base_bdevs_operational": 4, 00:16:26.371 "base_bdevs_list": [ 00:16:26.371 { 00:16:26.371 "name": "pt1", 00:16:26.371 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:26.371 "is_configured": true, 00:16:26.371 "data_offset": 2048, 00:16:26.371 "data_size": 63488 00:16:26.371 }, 00:16:26.371 { 00:16:26.371 "name": null, 00:16:26.371 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.371 "is_configured": false, 00:16:26.371 "data_offset": 0, 00:16:26.371 "data_size": 63488 00:16:26.371 }, 00:16:26.371 { 00:16:26.371 "name": null, 00:16:26.371 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:26.371 "is_configured": false, 00:16:26.371 "data_offset": 2048, 00:16:26.371 "data_size": 63488 00:16:26.371 }, 00:16:26.371 { 00:16:26.371 "name": null, 00:16:26.371 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:26.371 "is_configured": false, 00:16:26.371 "data_offset": 2048, 00:16:26.371 "data_size": 63488 00:16:26.371 } 00:16:26.371 ] 00:16:26.371 }' 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.371 05:54:33 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.632 [2024-12-12 05:54:34.058908] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:26.632 [2024-12-12 05:54:34.058958] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.632 [2024-12-12 05:54:34.058976] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:26.632 [2024-12-12 05:54:34.058985] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.632 [2024-12-12 05:54:34.059395] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.632 [2024-12-12 05:54:34.059415] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:26.632 [2024-12-12 05:54:34.059492] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:26.632 [2024-12-12 05:54:34.059539] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:26.632 pt2 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.632 [2024-12-12 05:54:34.066885] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:16:26.632 [2024-12-12 05:54:34.066929] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.632 [2024-12-12 05:54:34.066945] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:26.632 [2024-12-12 05:54:34.066953] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.632 [2024-12-12 05:54:34.067276] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.632 [2024-12-12 05:54:34.067291] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:26.632 [2024-12-12 05:54:34.067346] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:26.632 [2024-12-12 05:54:34.067368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:26.632 pt3 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.632 [2024-12-12 05:54:34.074853] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:26.632 [2024-12-12 05:54:34.074892] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.632 [2024-12-12 05:54:34.074912] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:16:26.632 [2024-12-12 05:54:34.074919] vbdev_passthru.c: 
697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.632 [2024-12-12 05:54:34.075268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.632 [2024-12-12 05:54:34.075290] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:26.632 [2024-12-12 05:54:34.075346] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:26.632 [2024-12-12 05:54:34.075364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:26.632 [2024-12-12 05:54:34.075514] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:26.632 [2024-12-12 05:54:34.075528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:26.632 [2024-12-12 05:54:34.075756] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:26.632 [2024-12-12 05:54:34.082826] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:26.632 [2024-12-12 05:54:34.082854] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:26.632 [2024-12-12 05:54:34.083006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.632 pt4 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.632 "name": "raid_bdev1", 00:16:26.632 "uuid": "de30dc11-0166-457d-a0da-a12fdcc7522d", 00:16:26.632 "strip_size_kb": 64, 00:16:26.632 "state": "online", 00:16:26.632 "raid_level": "raid5f", 00:16:26.632 "superblock": true, 00:16:26.632 "num_base_bdevs": 4, 00:16:26.632 "num_base_bdevs_discovered": 4, 00:16:26.632 "num_base_bdevs_operational": 4, 00:16:26.632 "base_bdevs_list": [ 00:16:26.632 { 00:16:26.632 "name": "pt1", 00:16:26.632 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:26.632 "is_configured": true, 00:16:26.632 
"data_offset": 2048, 00:16:26.632 "data_size": 63488 00:16:26.632 }, 00:16:26.632 { 00:16:26.632 "name": "pt2", 00:16:26.632 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.632 "is_configured": true, 00:16:26.632 "data_offset": 2048, 00:16:26.632 "data_size": 63488 00:16:26.632 }, 00:16:26.632 { 00:16:26.632 "name": "pt3", 00:16:26.632 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:26.632 "is_configured": true, 00:16:26.632 "data_offset": 2048, 00:16:26.632 "data_size": 63488 00:16:26.632 }, 00:16:26.632 { 00:16:26.632 "name": "pt4", 00:16:26.632 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:26.632 "is_configured": true, 00:16:26.632 "data_offset": 2048, 00:16:26.632 "data_size": 63488 00:16:26.632 } 00:16:26.632 ] 00:16:26.632 }' 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.632 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.202 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:27.202 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:27.202 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:27.202 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:27.202 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:27.202 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:27.202 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:27.202 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:27.202 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.202 05:54:34 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.202 [2024-12-12 05:54:34.538708] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.202 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.202 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:27.202 "name": "raid_bdev1", 00:16:27.202 "aliases": [ 00:16:27.202 "de30dc11-0166-457d-a0da-a12fdcc7522d" 00:16:27.202 ], 00:16:27.202 "product_name": "Raid Volume", 00:16:27.202 "block_size": 512, 00:16:27.202 "num_blocks": 190464, 00:16:27.202 "uuid": "de30dc11-0166-457d-a0da-a12fdcc7522d", 00:16:27.202 "assigned_rate_limits": { 00:16:27.202 "rw_ios_per_sec": 0, 00:16:27.202 "rw_mbytes_per_sec": 0, 00:16:27.202 "r_mbytes_per_sec": 0, 00:16:27.202 "w_mbytes_per_sec": 0 00:16:27.202 }, 00:16:27.202 "claimed": false, 00:16:27.202 "zoned": false, 00:16:27.202 "supported_io_types": { 00:16:27.202 "read": true, 00:16:27.202 "write": true, 00:16:27.202 "unmap": false, 00:16:27.202 "flush": false, 00:16:27.202 "reset": true, 00:16:27.202 "nvme_admin": false, 00:16:27.202 "nvme_io": false, 00:16:27.202 "nvme_io_md": false, 00:16:27.202 "write_zeroes": true, 00:16:27.202 "zcopy": false, 00:16:27.202 "get_zone_info": false, 00:16:27.203 "zone_management": false, 00:16:27.203 "zone_append": false, 00:16:27.203 "compare": false, 00:16:27.203 "compare_and_write": false, 00:16:27.203 "abort": false, 00:16:27.203 "seek_hole": false, 00:16:27.203 "seek_data": false, 00:16:27.203 "copy": false, 00:16:27.203 "nvme_iov_md": false 00:16:27.203 }, 00:16:27.203 "driver_specific": { 00:16:27.203 "raid": { 00:16:27.203 "uuid": "de30dc11-0166-457d-a0da-a12fdcc7522d", 00:16:27.203 "strip_size_kb": 64, 00:16:27.203 "state": "online", 00:16:27.203 "raid_level": "raid5f", 00:16:27.203 "superblock": true, 00:16:27.203 "num_base_bdevs": 4, 00:16:27.203 "num_base_bdevs_discovered": 4, 
00:16:27.203 "num_base_bdevs_operational": 4, 00:16:27.203 "base_bdevs_list": [ 00:16:27.203 { 00:16:27.203 "name": "pt1", 00:16:27.203 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:27.203 "is_configured": true, 00:16:27.203 "data_offset": 2048, 00:16:27.203 "data_size": 63488 00:16:27.203 }, 00:16:27.203 { 00:16:27.203 "name": "pt2", 00:16:27.203 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:27.203 "is_configured": true, 00:16:27.203 "data_offset": 2048, 00:16:27.203 "data_size": 63488 00:16:27.203 }, 00:16:27.203 { 00:16:27.203 "name": "pt3", 00:16:27.203 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:27.203 "is_configured": true, 00:16:27.203 "data_offset": 2048, 00:16:27.203 "data_size": 63488 00:16:27.203 }, 00:16:27.203 { 00:16:27.203 "name": "pt4", 00:16:27.203 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:27.203 "is_configured": true, 00:16:27.203 "data_offset": 2048, 00:16:27.203 "data_size": 63488 00:16:27.203 } 00:16:27.203 ] 00:16:27.203 } 00:16:27.203 } 00:16:27.203 }' 00:16:27.203 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:27.203 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:27.203 pt2 00:16:27.203 pt3 00:16:27.203 pt4' 00:16:27.203 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.203 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:27.203 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.203 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.203 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:16:27.203 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.203 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.203 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.203 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.203 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.203 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.203 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:27.203 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.203 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.203 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.203 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.463 05:54:34 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.463 [2024-12-12 05:54:34.846096] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.463 
05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' de30dc11-0166-457d-a0da-a12fdcc7522d '!=' de30dc11-0166-457d-a0da-a12fdcc7522d ']' 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.463 [2024-12-12 05:54:34.889906] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.463 "name": "raid_bdev1", 00:16:27.463 "uuid": "de30dc11-0166-457d-a0da-a12fdcc7522d", 00:16:27.463 "strip_size_kb": 64, 00:16:27.463 "state": "online", 00:16:27.463 "raid_level": "raid5f", 00:16:27.463 "superblock": true, 00:16:27.463 "num_base_bdevs": 4, 00:16:27.463 "num_base_bdevs_discovered": 3, 00:16:27.463 "num_base_bdevs_operational": 3, 00:16:27.463 "base_bdevs_list": [ 00:16:27.463 { 00:16:27.463 "name": null, 00:16:27.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.463 "is_configured": false, 00:16:27.463 "data_offset": 0, 00:16:27.463 "data_size": 63488 00:16:27.463 }, 00:16:27.463 { 00:16:27.463 "name": "pt2", 00:16:27.463 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:27.463 "is_configured": true, 00:16:27.463 "data_offset": 2048, 00:16:27.463 "data_size": 63488 00:16:27.463 }, 00:16:27.463 { 00:16:27.463 "name": "pt3", 00:16:27.463 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:27.463 "is_configured": true, 00:16:27.463 "data_offset": 2048, 00:16:27.463 "data_size": 63488 00:16:27.463 }, 00:16:27.463 { 00:16:27.463 "name": "pt4", 00:16:27.463 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:27.463 "is_configured": true, 00:16:27.463 
"data_offset": 2048, 00:16:27.463 "data_size": 63488 00:16:27.463 } 00:16:27.463 ] 00:16:27.463 }' 00:16:27.463 05:54:34 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.464 05:54:34 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.036 [2024-12-12 05:54:35.345082] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:28.036 [2024-12-12 05:54:35.345112] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:28.036 [2024-12-12 05:54:35.345179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:28.036 [2024-12-12 05:54:35.345272] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:28.036 [2024-12-12 05:54:35.345286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:28.036 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.037 [2024-12-12 05:54:35.440908] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:28.037 [2024-12-12 05:54:35.440967] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.037 [2024-12-12 05:54:35.440983] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:28.037 [2024-12-12 05:54:35.440991] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.037 [2024-12-12 05:54:35.443044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.037 [2024-12-12 05:54:35.443078] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:28.037 [2024-12-12 05:54:35.443151] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:28.037 [2024-12-12 05:54:35.443196] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:28.037 pt2 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.037 "name": "raid_bdev1", 00:16:28.037 "uuid": "de30dc11-0166-457d-a0da-a12fdcc7522d", 00:16:28.037 "strip_size_kb": 64, 00:16:28.037 "state": "configuring", 00:16:28.037 "raid_level": "raid5f", 00:16:28.037 "superblock": true, 00:16:28.037 
"num_base_bdevs": 4, 00:16:28.037 "num_base_bdevs_discovered": 1, 00:16:28.037 "num_base_bdevs_operational": 3, 00:16:28.037 "base_bdevs_list": [ 00:16:28.037 { 00:16:28.037 "name": null, 00:16:28.037 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.037 "is_configured": false, 00:16:28.037 "data_offset": 2048, 00:16:28.037 "data_size": 63488 00:16:28.037 }, 00:16:28.037 { 00:16:28.037 "name": "pt2", 00:16:28.037 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.037 "is_configured": true, 00:16:28.037 "data_offset": 2048, 00:16:28.037 "data_size": 63488 00:16:28.037 }, 00:16:28.037 { 00:16:28.037 "name": null, 00:16:28.037 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:28.037 "is_configured": false, 00:16:28.037 "data_offset": 2048, 00:16:28.037 "data_size": 63488 00:16:28.037 }, 00:16:28.037 { 00:16:28.037 "name": null, 00:16:28.037 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:28.037 "is_configured": false, 00:16:28.037 "data_offset": 2048, 00:16:28.037 "data_size": 63488 00:16:28.037 } 00:16:28.037 ] 00:16:28.037 }' 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.037 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.607 [2024-12-12 05:54:35.872188] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:28.607 [2024-12-12 
05:54:35.872259] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.607 [2024-12-12 05:54:35.872283] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:28.607 [2024-12-12 05:54:35.872291] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.607 [2024-12-12 05:54:35.872726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.607 [2024-12-12 05:54:35.872748] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:28.607 [2024-12-12 05:54:35.872830] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:28.607 [2024-12-12 05:54:35.872850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:28.607 pt3 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.607 "name": "raid_bdev1", 00:16:28.607 "uuid": "de30dc11-0166-457d-a0da-a12fdcc7522d", 00:16:28.607 "strip_size_kb": 64, 00:16:28.607 "state": "configuring", 00:16:28.607 "raid_level": "raid5f", 00:16:28.607 "superblock": true, 00:16:28.607 "num_base_bdevs": 4, 00:16:28.607 "num_base_bdevs_discovered": 2, 00:16:28.607 "num_base_bdevs_operational": 3, 00:16:28.607 "base_bdevs_list": [ 00:16:28.607 { 00:16:28.607 "name": null, 00:16:28.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.607 "is_configured": false, 00:16:28.607 "data_offset": 2048, 00:16:28.607 "data_size": 63488 00:16:28.607 }, 00:16:28.607 { 00:16:28.607 "name": "pt2", 00:16:28.607 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.607 "is_configured": true, 00:16:28.607 "data_offset": 2048, 00:16:28.607 "data_size": 63488 00:16:28.607 }, 00:16:28.607 { 00:16:28.607 "name": "pt3", 00:16:28.607 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:28.607 "is_configured": true, 00:16:28.607 "data_offset": 2048, 00:16:28.607 "data_size": 63488 00:16:28.607 }, 00:16:28.607 { 00:16:28.607 "name": null, 00:16:28.607 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:28.607 "is_configured": false, 00:16:28.607 "data_offset": 2048, 
00:16:28.607 "data_size": 63488 00:16:28.607 } 00:16:28.607 ] 00:16:28.607 }' 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.607 05:54:35 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.867 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:28.867 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:28.867 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:16:28.867 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:28.867 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.868 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.868 [2024-12-12 05:54:36.327418] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:28.868 [2024-12-12 05:54:36.327475] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.868 [2024-12-12 05:54:36.327497] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:28.868 [2024-12-12 05:54:36.327518] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.868 [2024-12-12 05:54:36.327942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.868 [2024-12-12 05:54:36.327960] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:28.868 [2024-12-12 05:54:36.328055] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:28.868 [2024-12-12 05:54:36.328088] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:28.868 [2024-12-12 05:54:36.328217] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:28.868 [2024-12-12 05:54:36.328225] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:28.868 [2024-12-12 05:54:36.328463] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:28.868 [2024-12-12 05:54:36.335630] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:28.868 [2024-12-12 05:54:36.335664] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:16:28.868 [2024-12-12 05:54:36.335944] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:28.868 pt4 00:16:28.868 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.868 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:28.868 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:28.868 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:28.868 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:28.868 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:28.868 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:28.868 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:28.868 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:28.868 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:28.868 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:28.868 
05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:28.868 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.868 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.868 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.868 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.868 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:28.868 "name": "raid_bdev1", 00:16:28.868 "uuid": "de30dc11-0166-457d-a0da-a12fdcc7522d", 00:16:28.868 "strip_size_kb": 64, 00:16:28.868 "state": "online", 00:16:28.868 "raid_level": "raid5f", 00:16:28.868 "superblock": true, 00:16:28.868 "num_base_bdevs": 4, 00:16:28.868 "num_base_bdevs_discovered": 3, 00:16:28.868 "num_base_bdevs_operational": 3, 00:16:28.868 "base_bdevs_list": [ 00:16:28.868 { 00:16:28.868 "name": null, 00:16:28.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.868 "is_configured": false, 00:16:28.868 "data_offset": 2048, 00:16:28.868 "data_size": 63488 00:16:28.868 }, 00:16:28.868 { 00:16:28.868 "name": "pt2", 00:16:28.868 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.868 "is_configured": true, 00:16:28.868 "data_offset": 2048, 00:16:28.868 "data_size": 63488 00:16:28.868 }, 00:16:28.868 { 00:16:28.868 "name": "pt3", 00:16:28.868 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:28.868 "is_configured": true, 00:16:28.868 "data_offset": 2048, 00:16:28.868 "data_size": 63488 00:16:28.868 }, 00:16:28.868 { 00:16:28.868 "name": "pt4", 00:16:28.868 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:28.868 "is_configured": true, 00:16:28.868 "data_offset": 2048, 00:16:28.868 "data_size": 63488 00:16:28.868 } 00:16:28.868 ] 00:16:28.868 }' 00:16:28.868 05:54:36 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:28.868 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.441 [2024-12-12 05:54:36.819918] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:29.441 [2024-12-12 05:54:36.819953] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:29.441 [2024-12-12 05:54:36.820035] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:29.441 [2024-12-12 05:54:36.820107] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:29.441 [2024-12-12 05:54:36.820118] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.441 [2024-12-12 05:54:36.891780] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:29.441 [2024-12-12 05:54:36.891843] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.441 [2024-12-12 05:54:36.891886] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:16:29.441 [2024-12-12 05:54:36.891897] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.441 [2024-12-12 05:54:36.894089] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.441 [2024-12-12 05:54:36.894131] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:29.441 [2024-12-12 05:54:36.894212] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:29.441 [2024-12-12 05:54:36.894256] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:29.441 
[2024-12-12 05:54:36.894414] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:29.441 [2024-12-12 05:54:36.894447] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:29.441 [2024-12-12 05:54:36.894462] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:29.441 [2024-12-12 05:54:36.894557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:29.441 [2024-12-12 05:54:36.894683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:29.441 pt1 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.441 "name": "raid_bdev1", 00:16:29.441 "uuid": "de30dc11-0166-457d-a0da-a12fdcc7522d", 00:16:29.441 "strip_size_kb": 64, 00:16:29.441 "state": "configuring", 00:16:29.441 "raid_level": "raid5f", 00:16:29.441 "superblock": true, 00:16:29.441 "num_base_bdevs": 4, 00:16:29.441 "num_base_bdevs_discovered": 2, 00:16:29.441 "num_base_bdevs_operational": 3, 00:16:29.441 "base_bdevs_list": [ 00:16:29.441 { 00:16:29.441 "name": null, 00:16:29.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.441 "is_configured": false, 00:16:29.441 "data_offset": 2048, 00:16:29.441 "data_size": 63488 00:16:29.441 }, 00:16:29.441 { 00:16:29.441 "name": "pt2", 00:16:29.441 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:29.441 "is_configured": true, 00:16:29.441 "data_offset": 2048, 00:16:29.441 "data_size": 63488 00:16:29.441 }, 00:16:29.441 { 00:16:29.441 "name": "pt3", 00:16:29.441 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:29.441 "is_configured": true, 00:16:29.441 "data_offset": 2048, 00:16:29.441 "data_size": 63488 00:16:29.441 }, 00:16:29.441 { 00:16:29.441 "name": null, 00:16:29.441 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:29.441 "is_configured": false, 00:16:29.441 "data_offset": 2048, 00:16:29.441 "data_size": 63488 00:16:29.441 } 00:16:29.441 ] 
00:16:29.441 }' 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.441 05:54:36 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.021 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:30.021 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.021 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.021 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:30.021 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.021 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:30.021 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:30.021 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.021 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.021 [2024-12-12 05:54:37.355000] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:30.021 [2024-12-12 05:54:37.355078] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:30.021 [2024-12-12 05:54:37.355101] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:30.021 [2024-12-12 05:54:37.355111] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:30.021 [2024-12-12 05:54:37.355603] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:30.021 [2024-12-12 05:54:37.355631] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:16:30.021 [2024-12-12 05:54:37.355720] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:30.021 [2024-12-12 05:54:37.355741] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:30.021 [2024-12-12 05:54:37.355895] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:30.021 [2024-12-12 05:54:37.355912] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:30.021 [2024-12-12 05:54:37.356169] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:30.021 [2024-12-12 05:54:37.363481] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:30.021 [2024-12-12 05:54:37.363526] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:30.021 [2024-12-12 05:54:37.363789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:30.021 pt4 00:16:30.021 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.021 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:30.021 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:30.021 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:30.021 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:30.021 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:30.021 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:30.022 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:30.022 05:54:37 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:30.022 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:30.022 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:30.022 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.022 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.022 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.022 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.022 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.022 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:30.022 "name": "raid_bdev1", 00:16:30.022 "uuid": "de30dc11-0166-457d-a0da-a12fdcc7522d", 00:16:30.022 "strip_size_kb": 64, 00:16:30.022 "state": "online", 00:16:30.022 "raid_level": "raid5f", 00:16:30.022 "superblock": true, 00:16:30.022 "num_base_bdevs": 4, 00:16:30.022 "num_base_bdevs_discovered": 3, 00:16:30.022 "num_base_bdevs_operational": 3, 00:16:30.022 "base_bdevs_list": [ 00:16:30.022 { 00:16:30.022 "name": null, 00:16:30.022 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.022 "is_configured": false, 00:16:30.022 "data_offset": 2048, 00:16:30.022 "data_size": 63488 00:16:30.022 }, 00:16:30.022 { 00:16:30.022 "name": "pt2", 00:16:30.022 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.022 "is_configured": true, 00:16:30.022 "data_offset": 2048, 00:16:30.022 "data_size": 63488 00:16:30.022 }, 00:16:30.022 { 00:16:30.022 "name": "pt3", 00:16:30.022 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:30.022 "is_configured": true, 00:16:30.022 "data_offset": 2048, 00:16:30.022 "data_size": 63488 
00:16:30.022 }, 00:16:30.022 { 00:16:30.022 "name": "pt4", 00:16:30.022 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:30.022 "is_configured": true, 00:16:30.022 "data_offset": 2048, 00:16:30.022 "data_size": 63488 00:16:30.022 } 00:16:30.022 ] 00:16:30.022 }' 00:16:30.022 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:30.022 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.592 [2024-12-12 05:54:37.879914] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' de30dc11-0166-457d-a0da-a12fdcc7522d '!=' de30dc11-0166-457d-a0da-a12fdcc7522d ']' 00:16:30.592 05:54:37 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 83957 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 83957 ']' 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 83957 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83957 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:30.592 killing process with pid 83957 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83957' 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 83957 00:16:30.592 [2024-12-12 05:54:37.964686] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:30.592 [2024-12-12 05:54:37.964788] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.592 05:54:37 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 83957 00:16:30.592 [2024-12-12 05:54:37.964881] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:30.592 [2024-12-12 05:54:37.964905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:30.853 [2024-12-12 05:54:38.337822] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:32.235 05:54:39 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:32.235 
00:16:32.235 real 0m8.409s 00:16:32.235 user 0m13.337s 00:16:32.235 sys 0m1.527s 00:16:32.235 05:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:32.235 05:54:39 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.235 ************************************ 00:16:32.235 END TEST raid5f_superblock_test 00:16:32.235 ************************************ 00:16:32.235 05:54:39 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:32.235 05:54:39 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:16:32.235 05:54:39 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:32.235 05:54:39 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:32.235 05:54:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:32.235 ************************************ 00:16:32.235 START TEST raid5f_rebuild_test 00:16:32.235 ************************************ 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:32.235 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:32.235 05:54:39 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:32.236 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:32.236 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:32.236 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:32.236 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84394 00:16:32.236 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:32.236 05:54:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84394 00:16:32.236 05:54:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84394 ']' 00:16:32.236 05:54:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:32.236 05:54:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:32.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:32.236 05:54:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:32.236 05:54:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:32.236 05:54:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.236 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:32.236 Zero copy mechanism will not be used. 00:16:32.236 [2024-12-12 05:54:39.577934] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:16:32.236 [2024-12-12 05:54:39.578071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84394 ] 00:16:32.236 [2024-12-12 05:54:39.751181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.496 [2024-12-12 05:54:39.858022] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.755 [2024-12-12 05:54:40.043621] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:32.755 [2024-12-12 05:54:40.043651] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.014 BaseBdev1_malloc 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.014 [2024-12-12 05:54:40.435155] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:16:33.014 [2024-12-12 05:54:40.435223] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.014 [2024-12-12 05:54:40.435260] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:33.014 [2024-12-12 05:54:40.435272] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.014 [2024-12-12 05:54:40.437289] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.014 [2024-12-12 05:54:40.437328] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:33.014 BaseBdev1 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.014 BaseBdev2_malloc 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.014 [2024-12-12 05:54:40.491702] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:33.014 [2024-12-12 05:54:40.491762] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.014 [2024-12-12 05:54:40.491795] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:33.014 [2024-12-12 05:54:40.491807] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.014 [2024-12-12 05:54:40.493800] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.014 [2024-12-12 05:54:40.493835] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:33.014 BaseBdev2 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.014 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.274 BaseBdev3_malloc 00:16:33.274 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.274 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:33.274 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.274 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.274 [2024-12-12 05:54:40.581357] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:33.274 [2024-12-12 05:54:40.581429] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.274 [2024-12-12 05:54:40.581449] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:33.274 [2024-12-12 05:54:40.581459] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.274 
[2024-12-12 05:54:40.583471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.274 [2024-12-12 05:54:40.583522] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:33.274 BaseBdev3 00:16:33.274 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.274 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:33.274 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:33.274 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.274 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.274 BaseBdev4_malloc 00:16:33.274 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.274 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:33.274 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.274 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.274 [2024-12-12 05:54:40.634111] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:33.274 [2024-12-12 05:54:40.634185] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.274 [2024-12-12 05:54:40.634204] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:33.274 [2024-12-12 05:54:40.634214] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.274 [2024-12-12 05:54:40.636196] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.274 [2024-12-12 05:54:40.636235] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:16:33.274 BaseBdev4 00:16:33.274 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.275 spare_malloc 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.275 spare_delay 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.275 [2024-12-12 05:54:40.699824] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:33.275 [2024-12-12 05:54:40.699903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.275 [2024-12-12 05:54:40.699919] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:33.275 [2024-12-12 05:54:40.699928] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.275 [2024-12-12 05:54:40.701945] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.275 [2024-12-12 05:54:40.701994] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:33.275 spare 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.275 [2024-12-12 05:54:40.711854] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:33.275 [2024-12-12 05:54:40.713592] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:33.275 [2024-12-12 05:54:40.713654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:33.275 [2024-12-12 05:54:40.713702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:33.275 [2024-12-12 05:54:40.713792] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:33.275 [2024-12-12 05:54:40.713806] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:33.275 [2024-12-12 05:54:40.714098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:33.275 [2024-12-12 05:54:40.721028] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:33.275 [2024-12-12 05:54:40.721052] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:33.275 [2024-12-12 05:54:40.721256] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:33.275 05:54:40 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.275 "name": "raid_bdev1", 00:16:33.275 "uuid": "6ba367bd-f2a0-43de-b242-efd2c336d4ea", 00:16:33.275 "strip_size_kb": 64, 00:16:33.275 "state": "online", 00:16:33.275 
"raid_level": "raid5f", 00:16:33.275 "superblock": false, 00:16:33.275 "num_base_bdevs": 4, 00:16:33.275 "num_base_bdevs_discovered": 4, 00:16:33.275 "num_base_bdevs_operational": 4, 00:16:33.275 "base_bdevs_list": [ 00:16:33.275 { 00:16:33.275 "name": "BaseBdev1", 00:16:33.275 "uuid": "f5e50f20-7603-5435-bdc6-d5cff9560ea4", 00:16:33.275 "is_configured": true, 00:16:33.275 "data_offset": 0, 00:16:33.275 "data_size": 65536 00:16:33.275 }, 00:16:33.275 { 00:16:33.275 "name": "BaseBdev2", 00:16:33.275 "uuid": "77720356-26ce-5ba4-8b63-7ea7fffb300c", 00:16:33.275 "is_configured": true, 00:16:33.275 "data_offset": 0, 00:16:33.275 "data_size": 65536 00:16:33.275 }, 00:16:33.275 { 00:16:33.275 "name": "BaseBdev3", 00:16:33.275 "uuid": "fe624053-2630-5248-a469-1e349d5faab4", 00:16:33.275 "is_configured": true, 00:16:33.275 "data_offset": 0, 00:16:33.275 "data_size": 65536 00:16:33.275 }, 00:16:33.275 { 00:16:33.275 "name": "BaseBdev4", 00:16:33.275 "uuid": "63c6f605-4bbb-5548-b636-1dc85b7d56af", 00:16:33.275 "is_configured": true, 00:16:33.275 "data_offset": 0, 00:16:33.275 "data_size": 65536 00:16:33.275 } 00:16:33.275 ] 00:16:33.275 }' 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.275 05:54:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.845 [2024-12-12 05:54:41.200840] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:16:33.845 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:34.105 [2024-12-12 05:54:41.468203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:34.105 /dev/nbd0 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:34.105 1+0 records in 00:16:34.105 1+0 records out 00:16:34.105 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039049 s, 10.5 MB/s 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:34.105 05:54:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:16:34.675 512+0 records in 00:16:34.675 512+0 records out 00:16:34.675 100663296 bytes (101 MB, 96 MiB) copied, 0.460207 s, 219 MB/s 00:16:34.675 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:34.675 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:34.675 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:34.676 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:34.676 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:34.676 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:34.676 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:34.936 
[2024-12-12 05:54:42.215547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.936 [2024-12-12 05:54:42.229768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:34.936 "name": "raid_bdev1", 00:16:34.936 "uuid": "6ba367bd-f2a0-43de-b242-efd2c336d4ea", 00:16:34.936 "strip_size_kb": 64, 00:16:34.936 "state": "online", 00:16:34.936 "raid_level": "raid5f", 00:16:34.936 "superblock": false, 00:16:34.936 "num_base_bdevs": 4, 00:16:34.936 "num_base_bdevs_discovered": 3, 00:16:34.936 "num_base_bdevs_operational": 3, 00:16:34.936 "base_bdevs_list": [ 00:16:34.936 { 00:16:34.936 "name": null, 00:16:34.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:34.936 "is_configured": false, 00:16:34.936 "data_offset": 0, 00:16:34.936 "data_size": 65536 00:16:34.936 }, 00:16:34.936 { 00:16:34.936 "name": "BaseBdev2", 00:16:34.936 "uuid": "77720356-26ce-5ba4-8b63-7ea7fffb300c", 00:16:34.936 "is_configured": true, 00:16:34.936 "data_offset": 0, 00:16:34.936 "data_size": 65536 00:16:34.936 }, 00:16:34.936 { 00:16:34.936 "name": "BaseBdev3", 00:16:34.936 "uuid": 
"fe624053-2630-5248-a469-1e349d5faab4", 00:16:34.936 "is_configured": true, 00:16:34.936 "data_offset": 0, 00:16:34.936 "data_size": 65536 00:16:34.936 }, 00:16:34.936 { 00:16:34.936 "name": "BaseBdev4", 00:16:34.936 "uuid": "63c6f605-4bbb-5548-b636-1dc85b7d56af", 00:16:34.936 "is_configured": true, 00:16:34.936 "data_offset": 0, 00:16:34.936 "data_size": 65536 00:16:34.936 } 00:16:34.936 ] 00:16:34.936 }' 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:34.936 05:54:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.196 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:35.196 05:54:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.196 05:54:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:35.196 [2024-12-12 05:54:42.696948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:35.196 [2024-12-12 05:54:42.711099] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:35.196 05:54:42 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.196 05:54:42 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:35.456 [2024-12-12 05:54:42.719966] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:36.396 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.396 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.396 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.396 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.396 05:54:43 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.396 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.396 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.396 05:54:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.396 05:54:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.396 05:54:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.396 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.396 "name": "raid_bdev1", 00:16:36.396 "uuid": "6ba367bd-f2a0-43de-b242-efd2c336d4ea", 00:16:36.396 "strip_size_kb": 64, 00:16:36.396 "state": "online", 00:16:36.396 "raid_level": "raid5f", 00:16:36.396 "superblock": false, 00:16:36.396 "num_base_bdevs": 4, 00:16:36.396 "num_base_bdevs_discovered": 4, 00:16:36.396 "num_base_bdevs_operational": 4, 00:16:36.396 "process": { 00:16:36.396 "type": "rebuild", 00:16:36.396 "target": "spare", 00:16:36.396 "progress": { 00:16:36.396 "blocks": 19200, 00:16:36.396 "percent": 9 00:16:36.396 } 00:16:36.396 }, 00:16:36.396 "base_bdevs_list": [ 00:16:36.396 { 00:16:36.396 "name": "spare", 00:16:36.396 "uuid": "ba60fe7a-4cf9-5317-8f17-1745439aea72", 00:16:36.396 "is_configured": true, 00:16:36.396 "data_offset": 0, 00:16:36.396 "data_size": 65536 00:16:36.396 }, 00:16:36.396 { 00:16:36.396 "name": "BaseBdev2", 00:16:36.396 "uuid": "77720356-26ce-5ba4-8b63-7ea7fffb300c", 00:16:36.396 "is_configured": true, 00:16:36.396 "data_offset": 0, 00:16:36.396 "data_size": 65536 00:16:36.396 }, 00:16:36.396 { 00:16:36.396 "name": "BaseBdev3", 00:16:36.396 "uuid": "fe624053-2630-5248-a469-1e349d5faab4", 00:16:36.396 "is_configured": true, 00:16:36.396 "data_offset": 0, 00:16:36.396 "data_size": 65536 00:16:36.396 }, 
00:16:36.396 { 00:16:36.396 "name": "BaseBdev4", 00:16:36.396 "uuid": "63c6f605-4bbb-5548-b636-1dc85b7d56af", 00:16:36.396 "is_configured": true, 00:16:36.396 "data_offset": 0, 00:16:36.396 "data_size": 65536 00:16:36.396 } 00:16:36.396 ] 00:16:36.396 }' 00:16:36.396 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.396 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:36.396 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.396 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:36.397 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:36.397 05:54:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.397 05:54:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.397 [2024-12-12 05:54:43.878721] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:36.657 [2024-12-12 05:54:43.925892] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:36.657 [2024-12-12 05:54:43.926019] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.657 [2024-12-12 05:54:43.926058] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:36.657 [2024-12-12 05:54:43.926097] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:36.657 05:54:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.657 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:36.657 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:16:36.657 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.657 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.657 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.657 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.657 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.657 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.657 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.657 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.657 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.657 05:54:43 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.657 05:54:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.657 05:54:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.657 05:54:43 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.657 05:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.657 "name": "raid_bdev1", 00:16:36.657 "uuid": "6ba367bd-f2a0-43de-b242-efd2c336d4ea", 00:16:36.657 "strip_size_kb": 64, 00:16:36.657 "state": "online", 00:16:36.657 "raid_level": "raid5f", 00:16:36.657 "superblock": false, 00:16:36.657 "num_base_bdevs": 4, 00:16:36.657 "num_base_bdevs_discovered": 3, 00:16:36.657 "num_base_bdevs_operational": 3, 00:16:36.657 "base_bdevs_list": [ 00:16:36.657 { 00:16:36.657 "name": null, 00:16:36.657 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:36.657 "is_configured": false, 00:16:36.657 "data_offset": 0, 00:16:36.657 "data_size": 65536 00:16:36.657 }, 00:16:36.657 { 00:16:36.657 "name": "BaseBdev2", 00:16:36.657 "uuid": "77720356-26ce-5ba4-8b63-7ea7fffb300c", 00:16:36.657 "is_configured": true, 00:16:36.657 "data_offset": 0, 00:16:36.657 "data_size": 65536 00:16:36.657 }, 00:16:36.657 { 00:16:36.657 "name": "BaseBdev3", 00:16:36.657 "uuid": "fe624053-2630-5248-a469-1e349d5faab4", 00:16:36.657 "is_configured": true, 00:16:36.657 "data_offset": 0, 00:16:36.657 "data_size": 65536 00:16:36.657 }, 00:16:36.657 { 00:16:36.657 "name": "BaseBdev4", 00:16:36.657 "uuid": "63c6f605-4bbb-5548-b636-1dc85b7d56af", 00:16:36.657 "is_configured": true, 00:16:36.657 "data_offset": 0, 00:16:36.657 "data_size": 65536 00:16:36.657 } 00:16:36.657 ] 00:16:36.657 }' 00:16:36.657 05:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.657 05:54:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.917 05:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:36.917 05:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.917 05:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:36.918 05:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:36.918 05:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.918 05:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.918 05:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.918 05:54:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.918 05:54:44 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.918 05:54:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.918 05:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.918 "name": "raid_bdev1", 00:16:36.918 "uuid": "6ba367bd-f2a0-43de-b242-efd2c336d4ea", 00:16:36.918 "strip_size_kb": 64, 00:16:36.918 "state": "online", 00:16:36.918 "raid_level": "raid5f", 00:16:36.918 "superblock": false, 00:16:36.918 "num_base_bdevs": 4, 00:16:36.918 "num_base_bdevs_discovered": 3, 00:16:36.918 "num_base_bdevs_operational": 3, 00:16:36.918 "base_bdevs_list": [ 00:16:36.918 { 00:16:36.918 "name": null, 00:16:36.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:36.918 "is_configured": false, 00:16:36.918 "data_offset": 0, 00:16:36.918 "data_size": 65536 00:16:36.918 }, 00:16:36.918 { 00:16:36.918 "name": "BaseBdev2", 00:16:36.918 "uuid": "77720356-26ce-5ba4-8b63-7ea7fffb300c", 00:16:36.918 "is_configured": true, 00:16:36.918 "data_offset": 0, 00:16:36.918 "data_size": 65536 00:16:36.918 }, 00:16:36.918 { 00:16:36.918 "name": "BaseBdev3", 00:16:36.918 "uuid": "fe624053-2630-5248-a469-1e349d5faab4", 00:16:36.918 "is_configured": true, 00:16:36.918 "data_offset": 0, 00:16:36.918 "data_size": 65536 00:16:36.918 }, 00:16:36.918 { 00:16:36.918 "name": "BaseBdev4", 00:16:36.918 "uuid": "63c6f605-4bbb-5548-b636-1dc85b7d56af", 00:16:36.918 "is_configured": true, 00:16:36.918 "data_offset": 0, 00:16:36.918 "data_size": 65536 00:16:36.918 } 00:16:36.918 ] 00:16:36.918 }' 00:16:36.918 05:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:37.178 05:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:37.178 05:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:37.178 05:54:44 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:37.178 05:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:37.178 05:54:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.178 05:54:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.178 [2024-12-12 05:54:44.514668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:37.178 [2024-12-12 05:54:44.530543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:16:37.178 05:54:44 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.178 05:54:44 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:37.178 [2024-12-12 05:54:44.539827] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:38.118 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.118 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.118 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.118 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.118 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.118 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.118 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.118 05:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.118 05:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.119 05:54:45 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.119 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.119 "name": "raid_bdev1", 00:16:38.119 "uuid": "6ba367bd-f2a0-43de-b242-efd2c336d4ea", 00:16:38.119 "strip_size_kb": 64, 00:16:38.119 "state": "online", 00:16:38.119 "raid_level": "raid5f", 00:16:38.119 "superblock": false, 00:16:38.119 "num_base_bdevs": 4, 00:16:38.119 "num_base_bdevs_discovered": 4, 00:16:38.119 "num_base_bdevs_operational": 4, 00:16:38.119 "process": { 00:16:38.119 "type": "rebuild", 00:16:38.119 "target": "spare", 00:16:38.119 "progress": { 00:16:38.119 "blocks": 19200, 00:16:38.119 "percent": 9 00:16:38.119 } 00:16:38.119 }, 00:16:38.119 "base_bdevs_list": [ 00:16:38.119 { 00:16:38.119 "name": "spare", 00:16:38.119 "uuid": "ba60fe7a-4cf9-5317-8f17-1745439aea72", 00:16:38.119 "is_configured": true, 00:16:38.119 "data_offset": 0, 00:16:38.119 "data_size": 65536 00:16:38.119 }, 00:16:38.119 { 00:16:38.119 "name": "BaseBdev2", 00:16:38.119 "uuid": "77720356-26ce-5ba4-8b63-7ea7fffb300c", 00:16:38.119 "is_configured": true, 00:16:38.119 "data_offset": 0, 00:16:38.119 "data_size": 65536 00:16:38.119 }, 00:16:38.119 { 00:16:38.119 "name": "BaseBdev3", 00:16:38.119 "uuid": "fe624053-2630-5248-a469-1e349d5faab4", 00:16:38.119 "is_configured": true, 00:16:38.119 "data_offset": 0, 00:16:38.119 "data_size": 65536 00:16:38.119 }, 00:16:38.119 { 00:16:38.119 "name": "BaseBdev4", 00:16:38.119 "uuid": "63c6f605-4bbb-5548-b636-1dc85b7d56af", 00:16:38.119 "is_configured": true, 00:16:38.119 "data_offset": 0, 00:16:38.119 "data_size": 65536 00:16:38.119 } 00:16:38.119 ] 00:16:38.119 }' 00:16:38.119 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.119 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.119 05:54:45 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.379 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.379 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:38.379 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:38.379 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:38.379 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=599 00:16:38.379 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:38.379 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:38.379 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.379 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:38.379 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:38.379 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.379 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.379 05:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.379 05:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.379 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.379 05:54:45 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.379 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:38.379 "name": "raid_bdev1", 00:16:38.379 "uuid": "6ba367bd-f2a0-43de-b242-efd2c336d4ea", 
00:16:38.379 "strip_size_kb": 64, 00:16:38.379 "state": "online", 00:16:38.379 "raid_level": "raid5f", 00:16:38.379 "superblock": false, 00:16:38.379 "num_base_bdevs": 4, 00:16:38.380 "num_base_bdevs_discovered": 4, 00:16:38.380 "num_base_bdevs_operational": 4, 00:16:38.380 "process": { 00:16:38.380 "type": "rebuild", 00:16:38.380 "target": "spare", 00:16:38.380 "progress": { 00:16:38.380 "blocks": 21120, 00:16:38.380 "percent": 10 00:16:38.380 } 00:16:38.380 }, 00:16:38.380 "base_bdevs_list": [ 00:16:38.380 { 00:16:38.380 "name": "spare", 00:16:38.380 "uuid": "ba60fe7a-4cf9-5317-8f17-1745439aea72", 00:16:38.380 "is_configured": true, 00:16:38.380 "data_offset": 0, 00:16:38.380 "data_size": 65536 00:16:38.380 }, 00:16:38.380 { 00:16:38.380 "name": "BaseBdev2", 00:16:38.380 "uuid": "77720356-26ce-5ba4-8b63-7ea7fffb300c", 00:16:38.380 "is_configured": true, 00:16:38.380 "data_offset": 0, 00:16:38.380 "data_size": 65536 00:16:38.380 }, 00:16:38.380 { 00:16:38.380 "name": "BaseBdev3", 00:16:38.380 "uuid": "fe624053-2630-5248-a469-1e349d5faab4", 00:16:38.380 "is_configured": true, 00:16:38.380 "data_offset": 0, 00:16:38.380 "data_size": 65536 00:16:38.380 }, 00:16:38.380 { 00:16:38.380 "name": "BaseBdev4", 00:16:38.380 "uuid": "63c6f605-4bbb-5548-b636-1dc85b7d56af", 00:16:38.380 "is_configured": true, 00:16:38.380 "data_offset": 0, 00:16:38.380 "data_size": 65536 00:16:38.380 } 00:16:38.380 ] 00:16:38.380 }' 00:16:38.380 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:38.380 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:38.380 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:38.380 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:38.380 05:54:45 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:39.320 05:54:46 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:39.320 05:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:39.320 05:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:39.320 05:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:39.320 05:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:39.320 05:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:39.320 05:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.320 05:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.320 05:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.320 05:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.580 05:54:46 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.580 05:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.580 "name": "raid_bdev1", 00:16:39.580 "uuid": "6ba367bd-f2a0-43de-b242-efd2c336d4ea", 00:16:39.580 "strip_size_kb": 64, 00:16:39.580 "state": "online", 00:16:39.580 "raid_level": "raid5f", 00:16:39.580 "superblock": false, 00:16:39.580 "num_base_bdevs": 4, 00:16:39.580 "num_base_bdevs_discovered": 4, 00:16:39.580 "num_base_bdevs_operational": 4, 00:16:39.580 "process": { 00:16:39.580 "type": "rebuild", 00:16:39.580 "target": "spare", 00:16:39.580 "progress": { 00:16:39.580 "blocks": 42240, 00:16:39.580 "percent": 21 00:16:39.580 } 00:16:39.580 }, 00:16:39.580 "base_bdevs_list": [ 00:16:39.580 { 00:16:39.580 "name": "spare", 00:16:39.580 "uuid": "ba60fe7a-4cf9-5317-8f17-1745439aea72", 
00:16:39.580 "is_configured": true, 00:16:39.580 "data_offset": 0, 00:16:39.580 "data_size": 65536 00:16:39.580 }, 00:16:39.580 { 00:16:39.580 "name": "BaseBdev2", 00:16:39.580 "uuid": "77720356-26ce-5ba4-8b63-7ea7fffb300c", 00:16:39.580 "is_configured": true, 00:16:39.580 "data_offset": 0, 00:16:39.580 "data_size": 65536 00:16:39.580 }, 00:16:39.580 { 00:16:39.580 "name": "BaseBdev3", 00:16:39.580 "uuid": "fe624053-2630-5248-a469-1e349d5faab4", 00:16:39.580 "is_configured": true, 00:16:39.580 "data_offset": 0, 00:16:39.580 "data_size": 65536 00:16:39.580 }, 00:16:39.580 { 00:16:39.580 "name": "BaseBdev4", 00:16:39.580 "uuid": "63c6f605-4bbb-5548-b636-1dc85b7d56af", 00:16:39.580 "is_configured": true, 00:16:39.580 "data_offset": 0, 00:16:39.580 "data_size": 65536 00:16:39.580 } 00:16:39.580 ] 00:16:39.580 }' 00:16:39.580 05:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.580 05:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:39.580 05:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.580 05:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.580 05:54:46 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:40.519 05:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:40.519 05:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.519 05:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.519 05:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.519 05:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.519 05:54:47 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.519 05:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.519 05:54:47 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.519 05:54:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.519 05:54:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.519 05:54:47 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.519 05:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.519 "name": "raid_bdev1", 00:16:40.519 "uuid": "6ba367bd-f2a0-43de-b242-efd2c336d4ea", 00:16:40.519 "strip_size_kb": 64, 00:16:40.519 "state": "online", 00:16:40.519 "raid_level": "raid5f", 00:16:40.519 "superblock": false, 00:16:40.519 "num_base_bdevs": 4, 00:16:40.519 "num_base_bdevs_discovered": 4, 00:16:40.519 "num_base_bdevs_operational": 4, 00:16:40.519 "process": { 00:16:40.519 "type": "rebuild", 00:16:40.519 "target": "spare", 00:16:40.519 "progress": { 00:16:40.519 "blocks": 65280, 00:16:40.519 "percent": 33 00:16:40.519 } 00:16:40.519 }, 00:16:40.519 "base_bdevs_list": [ 00:16:40.519 { 00:16:40.519 "name": "spare", 00:16:40.519 "uuid": "ba60fe7a-4cf9-5317-8f17-1745439aea72", 00:16:40.519 "is_configured": true, 00:16:40.519 "data_offset": 0, 00:16:40.519 "data_size": 65536 00:16:40.519 }, 00:16:40.519 { 00:16:40.519 "name": "BaseBdev2", 00:16:40.519 "uuid": "77720356-26ce-5ba4-8b63-7ea7fffb300c", 00:16:40.519 "is_configured": true, 00:16:40.520 "data_offset": 0, 00:16:40.520 "data_size": 65536 00:16:40.520 }, 00:16:40.520 { 00:16:40.520 "name": "BaseBdev3", 00:16:40.520 "uuid": "fe624053-2630-5248-a469-1e349d5faab4", 00:16:40.520 "is_configured": true, 00:16:40.520 "data_offset": 0, 00:16:40.520 "data_size": 65536 00:16:40.520 }, 00:16:40.520 { 00:16:40.520 "name": 
"BaseBdev4", 00:16:40.520 "uuid": "63c6f605-4bbb-5548-b636-1dc85b7d56af", 00:16:40.520 "is_configured": true, 00:16:40.520 "data_offset": 0, 00:16:40.520 "data_size": 65536 00:16:40.520 } 00:16:40.520 ] 00:16:40.520 }' 00:16:40.520 05:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.779 05:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.779 05:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.779 05:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.779 05:54:48 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:41.720 05:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:41.720 05:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:41.720 05:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:41.720 05:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:41.720 05:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:41.720 05:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:41.720 05:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.720 05:54:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.720 05:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.720 05:54:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.720 05:54:49 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.720 05:54:49 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:41.720 "name": "raid_bdev1", 00:16:41.720 "uuid": "6ba367bd-f2a0-43de-b242-efd2c336d4ea", 00:16:41.720 "strip_size_kb": 64, 00:16:41.720 "state": "online", 00:16:41.720 "raid_level": "raid5f", 00:16:41.720 "superblock": false, 00:16:41.720 "num_base_bdevs": 4, 00:16:41.720 "num_base_bdevs_discovered": 4, 00:16:41.720 "num_base_bdevs_operational": 4, 00:16:41.720 "process": { 00:16:41.720 "type": "rebuild", 00:16:41.720 "target": "spare", 00:16:41.720 "progress": { 00:16:41.720 "blocks": 86400, 00:16:41.720 "percent": 43 00:16:41.720 } 00:16:41.720 }, 00:16:41.720 "base_bdevs_list": [ 00:16:41.720 { 00:16:41.720 "name": "spare", 00:16:41.720 "uuid": "ba60fe7a-4cf9-5317-8f17-1745439aea72", 00:16:41.720 "is_configured": true, 00:16:41.720 "data_offset": 0, 00:16:41.720 "data_size": 65536 00:16:41.720 }, 00:16:41.720 { 00:16:41.720 "name": "BaseBdev2", 00:16:41.720 "uuid": "77720356-26ce-5ba4-8b63-7ea7fffb300c", 00:16:41.720 "is_configured": true, 00:16:41.720 "data_offset": 0, 00:16:41.720 "data_size": 65536 00:16:41.720 }, 00:16:41.720 { 00:16:41.720 "name": "BaseBdev3", 00:16:41.720 "uuid": "fe624053-2630-5248-a469-1e349d5faab4", 00:16:41.720 "is_configured": true, 00:16:41.720 "data_offset": 0, 00:16:41.720 "data_size": 65536 00:16:41.720 }, 00:16:41.720 { 00:16:41.720 "name": "BaseBdev4", 00:16:41.720 "uuid": "63c6f605-4bbb-5548-b636-1dc85b7d56af", 00:16:41.720 "is_configured": true, 00:16:41.720 "data_offset": 0, 00:16:41.720 "data_size": 65536 00:16:41.720 } 00:16:41.720 ] 00:16:41.720 }' 00:16:41.720 05:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:41.720 05:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:41.720 05:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:41.980 05:54:49 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:41.980 05:54:49 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:42.920 05:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:42.920 05:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.920 05:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.920 05:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.920 05:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.920 05:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.920 05:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.920 05:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.920 05:54:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.920 05:54:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.920 05:54:50 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.920 05:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.920 "name": "raid_bdev1", 00:16:42.920 "uuid": "6ba367bd-f2a0-43de-b242-efd2c336d4ea", 00:16:42.920 "strip_size_kb": 64, 00:16:42.920 "state": "online", 00:16:42.920 "raid_level": "raid5f", 00:16:42.921 "superblock": false, 00:16:42.921 "num_base_bdevs": 4, 00:16:42.921 "num_base_bdevs_discovered": 4, 00:16:42.921 "num_base_bdevs_operational": 4, 00:16:42.921 "process": { 00:16:42.921 "type": "rebuild", 00:16:42.921 "target": "spare", 00:16:42.921 "progress": { 00:16:42.921 "blocks": 109440, 00:16:42.921 "percent": 55 00:16:42.921 } 
00:16:42.921 }, 00:16:42.921 "base_bdevs_list": [ 00:16:42.921 { 00:16:42.921 "name": "spare", 00:16:42.921 "uuid": "ba60fe7a-4cf9-5317-8f17-1745439aea72", 00:16:42.921 "is_configured": true, 00:16:42.921 "data_offset": 0, 00:16:42.921 "data_size": 65536 00:16:42.921 }, 00:16:42.921 { 00:16:42.921 "name": "BaseBdev2", 00:16:42.921 "uuid": "77720356-26ce-5ba4-8b63-7ea7fffb300c", 00:16:42.921 "is_configured": true, 00:16:42.921 "data_offset": 0, 00:16:42.921 "data_size": 65536 00:16:42.921 }, 00:16:42.921 { 00:16:42.921 "name": "BaseBdev3", 00:16:42.921 "uuid": "fe624053-2630-5248-a469-1e349d5faab4", 00:16:42.921 "is_configured": true, 00:16:42.921 "data_offset": 0, 00:16:42.921 "data_size": 65536 00:16:42.921 }, 00:16:42.921 { 00:16:42.921 "name": "BaseBdev4", 00:16:42.921 "uuid": "63c6f605-4bbb-5548-b636-1dc85b7d56af", 00:16:42.921 "is_configured": true, 00:16:42.921 "data_offset": 0, 00:16:42.921 "data_size": 65536 00:16:42.921 } 00:16:42.921 ] 00:16:42.921 }' 00:16:42.921 05:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.921 05:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.921 05:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.921 05:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.921 05:54:50 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:44.302 05:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:44.302 05:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:44.302 05:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.302 05:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:44.302 
05:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:44.302 05:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.302 05:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.302 05:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.302 05:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.302 05:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.303 05:54:51 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.303 05:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.303 "name": "raid_bdev1", 00:16:44.303 "uuid": "6ba367bd-f2a0-43de-b242-efd2c336d4ea", 00:16:44.303 "strip_size_kb": 64, 00:16:44.303 "state": "online", 00:16:44.303 "raid_level": "raid5f", 00:16:44.303 "superblock": false, 00:16:44.303 "num_base_bdevs": 4, 00:16:44.303 "num_base_bdevs_discovered": 4, 00:16:44.303 "num_base_bdevs_operational": 4, 00:16:44.303 "process": { 00:16:44.303 "type": "rebuild", 00:16:44.303 "target": "spare", 00:16:44.303 "progress": { 00:16:44.303 "blocks": 130560, 00:16:44.303 "percent": 66 00:16:44.303 } 00:16:44.303 }, 00:16:44.303 "base_bdevs_list": [ 00:16:44.303 { 00:16:44.303 "name": "spare", 00:16:44.303 "uuid": "ba60fe7a-4cf9-5317-8f17-1745439aea72", 00:16:44.303 "is_configured": true, 00:16:44.303 "data_offset": 0, 00:16:44.303 "data_size": 65536 00:16:44.303 }, 00:16:44.303 { 00:16:44.303 "name": "BaseBdev2", 00:16:44.303 "uuid": "77720356-26ce-5ba4-8b63-7ea7fffb300c", 00:16:44.303 "is_configured": true, 00:16:44.303 "data_offset": 0, 00:16:44.303 "data_size": 65536 00:16:44.303 }, 00:16:44.303 { 00:16:44.303 "name": "BaseBdev3", 00:16:44.303 "uuid": "fe624053-2630-5248-a469-1e349d5faab4", 
00:16:44.303 "is_configured": true, 00:16:44.303 "data_offset": 0, 00:16:44.303 "data_size": 65536 00:16:44.303 }, 00:16:44.303 { 00:16:44.303 "name": "BaseBdev4", 00:16:44.303 "uuid": "63c6f605-4bbb-5548-b636-1dc85b7d56af", 00:16:44.303 "is_configured": true, 00:16:44.303 "data_offset": 0, 00:16:44.303 "data_size": 65536 00:16:44.303 } 00:16:44.303 ] 00:16:44.303 }' 00:16:44.303 05:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.303 05:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:44.303 05:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.303 05:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:44.303 05:54:51 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:45.240 05:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:45.240 05:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:45.240 05:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.240 05:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:45.240 05:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:45.240 05:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.240 05:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.240 05:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.240 05:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.240 05:54:52 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:45.240 05:54:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.240 05:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.240 "name": "raid_bdev1", 00:16:45.240 "uuid": "6ba367bd-f2a0-43de-b242-efd2c336d4ea", 00:16:45.240 "strip_size_kb": 64, 00:16:45.240 "state": "online", 00:16:45.240 "raid_level": "raid5f", 00:16:45.240 "superblock": false, 00:16:45.240 "num_base_bdevs": 4, 00:16:45.240 "num_base_bdevs_discovered": 4, 00:16:45.240 "num_base_bdevs_operational": 4, 00:16:45.240 "process": { 00:16:45.240 "type": "rebuild", 00:16:45.240 "target": "spare", 00:16:45.240 "progress": { 00:16:45.240 "blocks": 153600, 00:16:45.240 "percent": 78 00:16:45.240 } 00:16:45.240 }, 00:16:45.240 "base_bdevs_list": [ 00:16:45.240 { 00:16:45.240 "name": "spare", 00:16:45.240 "uuid": "ba60fe7a-4cf9-5317-8f17-1745439aea72", 00:16:45.240 "is_configured": true, 00:16:45.240 "data_offset": 0, 00:16:45.240 "data_size": 65536 00:16:45.240 }, 00:16:45.240 { 00:16:45.240 "name": "BaseBdev2", 00:16:45.240 "uuid": "77720356-26ce-5ba4-8b63-7ea7fffb300c", 00:16:45.240 "is_configured": true, 00:16:45.240 "data_offset": 0, 00:16:45.240 "data_size": 65536 00:16:45.240 }, 00:16:45.240 { 00:16:45.240 "name": "BaseBdev3", 00:16:45.240 "uuid": "fe624053-2630-5248-a469-1e349d5faab4", 00:16:45.240 "is_configured": true, 00:16:45.240 "data_offset": 0, 00:16:45.240 "data_size": 65536 00:16:45.240 }, 00:16:45.240 { 00:16:45.240 "name": "BaseBdev4", 00:16:45.240 "uuid": "63c6f605-4bbb-5548-b636-1dc85b7d56af", 00:16:45.240 "is_configured": true, 00:16:45.240 "data_offset": 0, 00:16:45.240 "data_size": 65536 00:16:45.240 } 00:16:45.240 ] 00:16:45.240 }' 00:16:45.240 05:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.240 05:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:16:45.240 05:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.240 05:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.240 05:54:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:46.620 05:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:46.620 05:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:46.620 05:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.620 05:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:46.620 05:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:46.620 05:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.620 05:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.620 05:54:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.620 05:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.620 05:54:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.620 05:54:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.620 05:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.620 "name": "raid_bdev1", 00:16:46.620 "uuid": "6ba367bd-f2a0-43de-b242-efd2c336d4ea", 00:16:46.620 "strip_size_kb": 64, 00:16:46.621 "state": "online", 00:16:46.621 "raid_level": "raid5f", 00:16:46.621 "superblock": false, 00:16:46.621 "num_base_bdevs": 4, 00:16:46.621 "num_base_bdevs_discovered": 4, 00:16:46.621 "num_base_bdevs_operational": 4, 00:16:46.621 
"process": { 00:16:46.621 "type": "rebuild", 00:16:46.621 "target": "spare", 00:16:46.621 "progress": { 00:16:46.621 "blocks": 174720, 00:16:46.621 "percent": 88 00:16:46.621 } 00:16:46.621 }, 00:16:46.621 "base_bdevs_list": [ 00:16:46.621 { 00:16:46.621 "name": "spare", 00:16:46.621 "uuid": "ba60fe7a-4cf9-5317-8f17-1745439aea72", 00:16:46.621 "is_configured": true, 00:16:46.621 "data_offset": 0, 00:16:46.621 "data_size": 65536 00:16:46.621 }, 00:16:46.621 { 00:16:46.621 "name": "BaseBdev2", 00:16:46.621 "uuid": "77720356-26ce-5ba4-8b63-7ea7fffb300c", 00:16:46.621 "is_configured": true, 00:16:46.621 "data_offset": 0, 00:16:46.621 "data_size": 65536 00:16:46.621 }, 00:16:46.621 { 00:16:46.621 "name": "BaseBdev3", 00:16:46.621 "uuid": "fe624053-2630-5248-a469-1e349d5faab4", 00:16:46.621 "is_configured": true, 00:16:46.621 "data_offset": 0, 00:16:46.621 "data_size": 65536 00:16:46.621 }, 00:16:46.621 { 00:16:46.621 "name": "BaseBdev4", 00:16:46.621 "uuid": "63c6f605-4bbb-5548-b636-1dc85b7d56af", 00:16:46.621 "is_configured": true, 00:16:46.621 "data_offset": 0, 00:16:46.621 "data_size": 65536 00:16:46.621 } 00:16:46.621 ] 00:16:46.621 }' 00:16:46.621 05:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.621 05:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:46.621 05:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.621 05:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:46.621 05:54:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:47.560 05:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:47.560 05:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:47.560 05:54:54 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.560 05:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:47.560 05:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:47.560 05:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.560 [2024-12-12 05:54:54.887412] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:47.560 [2024-12-12 05:54:54.887479] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:47.560 [2024-12-12 05:54:54.887534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.560 05:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.560 05:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.560 05:54:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.560 05:54:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.560 05:54:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.560 05:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.560 "name": "raid_bdev1", 00:16:47.560 "uuid": "6ba367bd-f2a0-43de-b242-efd2c336d4ea", 00:16:47.560 "strip_size_kb": 64, 00:16:47.560 "state": "online", 00:16:47.560 "raid_level": "raid5f", 00:16:47.560 "superblock": false, 00:16:47.560 "num_base_bdevs": 4, 00:16:47.560 "num_base_bdevs_discovered": 4, 00:16:47.560 "num_base_bdevs_operational": 4, 00:16:47.560 "base_bdevs_list": [ 00:16:47.560 { 00:16:47.560 "name": "spare", 00:16:47.560 "uuid": "ba60fe7a-4cf9-5317-8f17-1745439aea72", 00:16:47.560 "is_configured": true, 00:16:47.560 "data_offset": 0, 00:16:47.560 "data_size": 65536 
00:16:47.560 }, 00:16:47.560 { 00:16:47.560 "name": "BaseBdev2", 00:16:47.560 "uuid": "77720356-26ce-5ba4-8b63-7ea7fffb300c", 00:16:47.560 "is_configured": true, 00:16:47.560 "data_offset": 0, 00:16:47.560 "data_size": 65536 00:16:47.560 }, 00:16:47.560 { 00:16:47.560 "name": "BaseBdev3", 00:16:47.560 "uuid": "fe624053-2630-5248-a469-1e349d5faab4", 00:16:47.560 "is_configured": true, 00:16:47.560 "data_offset": 0, 00:16:47.560 "data_size": 65536 00:16:47.560 }, 00:16:47.560 { 00:16:47.560 "name": "BaseBdev4", 00:16:47.560 "uuid": "63c6f605-4bbb-5548-b636-1dc85b7d56af", 00:16:47.560 "is_configured": true, 00:16:47.561 "data_offset": 0, 00:16:47.561 "data_size": 65536 00:16:47.561 } 00:16:47.561 ] 00:16:47.561 }' 00:16:47.561 05:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.561 05:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:47.561 05:54:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.561 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:47.561 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:47.561 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:47.561 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:47.561 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:47.561 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:47.561 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:47.561 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.561 05:54:55 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.561 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.561 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.561 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.820 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.820 "name": "raid_bdev1", 00:16:47.820 "uuid": "6ba367bd-f2a0-43de-b242-efd2c336d4ea", 00:16:47.820 "strip_size_kb": 64, 00:16:47.820 "state": "online", 00:16:47.820 "raid_level": "raid5f", 00:16:47.820 "superblock": false, 00:16:47.820 "num_base_bdevs": 4, 00:16:47.820 "num_base_bdevs_discovered": 4, 00:16:47.820 "num_base_bdevs_operational": 4, 00:16:47.820 "base_bdevs_list": [ 00:16:47.820 { 00:16:47.820 "name": "spare", 00:16:47.820 "uuid": "ba60fe7a-4cf9-5317-8f17-1745439aea72", 00:16:47.820 "is_configured": true, 00:16:47.820 "data_offset": 0, 00:16:47.820 "data_size": 65536 00:16:47.820 }, 00:16:47.820 { 00:16:47.820 "name": "BaseBdev2", 00:16:47.820 "uuid": "77720356-26ce-5ba4-8b63-7ea7fffb300c", 00:16:47.820 "is_configured": true, 00:16:47.820 "data_offset": 0, 00:16:47.820 "data_size": 65536 00:16:47.820 }, 00:16:47.820 { 00:16:47.820 "name": "BaseBdev3", 00:16:47.820 "uuid": "fe624053-2630-5248-a469-1e349d5faab4", 00:16:47.820 "is_configured": true, 00:16:47.820 "data_offset": 0, 00:16:47.820 "data_size": 65536 00:16:47.820 }, 00:16:47.820 { 00:16:47.820 "name": "BaseBdev4", 00:16:47.820 "uuid": "63c6f605-4bbb-5548-b636-1dc85b7d56af", 00:16:47.820 "is_configured": true, 00:16:47.820 "data_offset": 0, 00:16:47.820 "data_size": 65536 00:16:47.820 } 00:16:47.820 ] 00:16:47.820 }' 00:16:47.820 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.820 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == 
\n\o\n\e ]] 00:16:47.820 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.820 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:47.820 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:47.820 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.820 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.820 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:47.820 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:47.820 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:47.820 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.820 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.820 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.821 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.821 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.821 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.821 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.821 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.821 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.821 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.821 "name": 
"raid_bdev1", 00:16:47.821 "uuid": "6ba367bd-f2a0-43de-b242-efd2c336d4ea", 00:16:47.821 "strip_size_kb": 64, 00:16:47.821 "state": "online", 00:16:47.821 "raid_level": "raid5f", 00:16:47.821 "superblock": false, 00:16:47.821 "num_base_bdevs": 4, 00:16:47.821 "num_base_bdevs_discovered": 4, 00:16:47.821 "num_base_bdevs_operational": 4, 00:16:47.821 "base_bdevs_list": [ 00:16:47.821 { 00:16:47.821 "name": "spare", 00:16:47.821 "uuid": "ba60fe7a-4cf9-5317-8f17-1745439aea72", 00:16:47.821 "is_configured": true, 00:16:47.821 "data_offset": 0, 00:16:47.821 "data_size": 65536 00:16:47.821 }, 00:16:47.821 { 00:16:47.821 "name": "BaseBdev2", 00:16:47.821 "uuid": "77720356-26ce-5ba4-8b63-7ea7fffb300c", 00:16:47.821 "is_configured": true, 00:16:47.821 "data_offset": 0, 00:16:47.821 "data_size": 65536 00:16:47.821 }, 00:16:47.821 { 00:16:47.821 "name": "BaseBdev3", 00:16:47.821 "uuid": "fe624053-2630-5248-a469-1e349d5faab4", 00:16:47.821 "is_configured": true, 00:16:47.821 "data_offset": 0, 00:16:47.821 "data_size": 65536 00:16:47.821 }, 00:16:47.821 { 00:16:47.821 "name": "BaseBdev4", 00:16:47.821 "uuid": "63c6f605-4bbb-5548-b636-1dc85b7d56af", 00:16:47.821 "is_configured": true, 00:16:47.821 "data_offset": 0, 00:16:47.821 "data_size": 65536 00:16:47.821 } 00:16:47.821 ] 00:16:47.821 }' 00:16:47.821 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.821 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.390 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:48.390 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.390 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.390 [2024-12-12 05:54:55.641230] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:48.390 [2024-12-12 05:54:55.641313] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:48.390 [2024-12-12 05:54:55.641425] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:48.390 [2024-12-12 05:54:55.641562] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:48.390 [2024-12-12 05:54:55.641611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:48.390 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.390 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.390 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.390 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.390 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:48.390 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.390 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:48.390 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:48.390 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:48.390 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:48.390 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:48.390 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:48.390 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:48.390 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:48.390 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:48.390 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:48.390 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:48.390 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:48.391 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:48.391 /dev/nbd0 00:16:48.391 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:48.391 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:48.391 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:48.391 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:48.650 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:48.650 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:48.650 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:48.650 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:48.650 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:48.650 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:48.650 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:48.651 1+0 records in 00:16:48.651 1+0 records out 00:16:48.651 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000560646 s, 7.3 MB/s 00:16:48.651 05:54:55 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:48.651 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:48.651 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:48.651 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:48.651 05:54:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:48.651 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:48.651 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:48.651 05:54:55 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:48.651 /dev/nbd1 00:16:48.651 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:48.651 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:48.651 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:48.651 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:16:48.651 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:48.651 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:48.651 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:48.651 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:16:48.651 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:48.651 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 
20 )) 00:16:48.651 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:48.911 1+0 records in 00:16:48.911 1+0 records out 00:16:48.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040778 s, 10.0 MB/s 00:16:48.911 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:48.911 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:16:48.911 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:48.911 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:48.911 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:16:48.911 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:48.911 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:48.911 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:48.911 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:48.911 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:48.911 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:48.911 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:48.911 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:48.911 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:48.911 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:49.170 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:49.170 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:49.170 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:49.170 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:49.170 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:49.170 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:49.170 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:49.170 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:49.170 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:49.170 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:49.430 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:49.430 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:49.430 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:49.430 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:49.430 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:49.430 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:49.430 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:49.430 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:49.430 05:54:56 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:49.430 05:54:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84394 00:16:49.430 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84394 ']' 00:16:49.430 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84394 00:16:49.430 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:16:49.430 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:49.430 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84394 00:16:49.430 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:49.430 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:49.430 killing process with pid 84394 00:16:49.430 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84394' 00:16:49.430 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84394 00:16:49.430 Received shutdown signal, test time was about 60.000000 seconds 00:16:49.430 00:16:49.430 Latency(us) 00:16:49.430 [2024-12-12T05:54:56.952Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.430 [2024-12-12T05:54:56.952Z] =================================================================================================================== 00:16:49.430 [2024-12-12T05:54:56.952Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:49.430 [2024-12-12 05:54:56.812208] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:49.430 05:54:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 84394 00:16:49.998 [2024-12-12 05:54:57.276470] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:50.938 00:16:50.938 real 0m18.836s 00:16:50.938 user 0m22.667s 00:16:50.938 sys 0m2.195s 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:50.938 ************************************ 00:16:50.938 END TEST raid5f_rebuild_test 00:16:50.938 ************************************ 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.938 05:54:58 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:16:50.938 05:54:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:50.938 05:54:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:50.938 05:54:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:50.938 ************************************ 00:16:50.938 START TEST raid5f_rebuild_test_sb 00:16:50.938 ************************************ 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:50.938 05:54:58 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:50.938 
05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=84790 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 84790 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 84790 ']' 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.938 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.939 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.939 05:54:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:51.199 [2024-12-12 05:54:58.487776] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:16:51.199 [2024-12-12 05:54:58.487968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84790 ] 00:16:51.199 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:51.199 Zero copy mechanism will not be used. 00:16:51.199 [2024-12-12 05:54:58.659158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.458 [2024-12-12 05:54:58.767543] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.458 [2024-12-12 05:54:58.941272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:51.458 [2024-12-12 05:54:58.941311] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.029 BaseBdev1_malloc 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:52.029 [2024-12-12 05:54:59.346604] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:52.029 [2024-12-12 05:54:59.346664] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.029 [2024-12-12 05:54:59.346686] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:52.029 [2024-12-12 05:54:59.346697] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.029 [2024-12-12 05:54:59.348687] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.029 [2024-12-12 05:54:59.348729] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:52.029 BaseBdev1 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.029 BaseBdev2_malloc 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.029 [2024-12-12 05:54:59.399396] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:52.029 
[2024-12-12 05:54:59.399457] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.029 [2024-12-12 05:54:59.399476] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:52.029 [2024-12-12 05:54:59.399486] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.029 [2024-12-12 05:54:59.401487] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.029 [2024-12-12 05:54:59.401536] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:52.029 BaseBdev2 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.029 BaseBdev3_malloc 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.029 [2024-12-12 05:54:59.483905] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:52.029 [2024-12-12 05:54:59.483957] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.029 [2024-12-12 05:54:59.483994] vbdev_passthru.c: 
682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:52.029 [2024-12-12 05:54:59.484005] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.029 [2024-12-12 05:54:59.485955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.029 [2024-12-12 05:54:59.486064] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:52.029 BaseBdev3 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.029 BaseBdev4_malloc 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.029 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.029 [2024-12-12 05:54:59.535913] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:52.029 [2024-12-12 05:54:59.535966] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.030 [2024-12-12 05:54:59.536002] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:52.030 [2024-12-12 05:54:59.536012] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: 
bdev claimed 00:16:52.030 [2024-12-12 05:54:59.537942] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.030 [2024-12-12 05:54:59.537983] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:52.030 BaseBdev4 00:16:52.030 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.030 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:52.030 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.030 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.290 spare_malloc 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.290 spare_delay 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.290 [2024-12-12 05:54:59.598574] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:52.290 [2024-12-12 05:54:59.598675] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:52.290 [2024-12-12 05:54:59.598731] 
vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:52.290 [2024-12-12 05:54:59.598746] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:52.290 [2024-12-12 05:54:59.600759] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:52.290 [2024-12-12 05:54:59.600799] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:52.290 spare 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.290 [2024-12-12 05:54:59.610603] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:52.290 [2024-12-12 05:54:59.612347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:52.290 [2024-12-12 05:54:59.612404] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:52.290 [2024-12-12 05:54:59.612451] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:52.290 [2024-12-12 05:54:59.612660] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:52.290 [2024-12-12 05:54:59.612675] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:16:52.290 [2024-12-12 05:54:59.612900] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:52.290 [2024-12-12 05:54:59.619912] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:52.290 
[2024-12-12 05:54:59.619969] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:52.290 [2024-12-12 05:54:59.620186] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.290 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.290 "name": "raid_bdev1", 00:16:52.290 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:16:52.290 "strip_size_kb": 64, 00:16:52.290 "state": "online", 00:16:52.290 "raid_level": "raid5f", 00:16:52.290 "superblock": true, 00:16:52.290 "num_base_bdevs": 4, 00:16:52.290 "num_base_bdevs_discovered": 4, 00:16:52.291 "num_base_bdevs_operational": 4, 00:16:52.291 "base_bdevs_list": [ 00:16:52.291 { 00:16:52.291 "name": "BaseBdev1", 00:16:52.291 "uuid": "12b67854-381f-518d-8008-4f3dbadea239", 00:16:52.291 "is_configured": true, 00:16:52.291 "data_offset": 2048, 00:16:52.291 "data_size": 63488 00:16:52.291 }, 00:16:52.291 { 00:16:52.291 "name": "BaseBdev2", 00:16:52.291 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:16:52.291 "is_configured": true, 00:16:52.291 "data_offset": 2048, 00:16:52.291 "data_size": 63488 00:16:52.291 }, 00:16:52.291 { 00:16:52.291 "name": "BaseBdev3", 00:16:52.291 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:16:52.291 "is_configured": true, 00:16:52.291 "data_offset": 2048, 00:16:52.291 "data_size": 63488 00:16:52.291 }, 00:16:52.291 { 00:16:52.291 "name": "BaseBdev4", 00:16:52.291 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:16:52.291 "is_configured": true, 00:16:52.291 "data_offset": 2048, 00:16:52.291 "data_size": 63488 00:16:52.291 } 00:16:52.291 ] 00:16:52.291 }' 00:16:52.291 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.291 05:54:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.550 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:52.550 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.550 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:52.550 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:52.550 [2024-12-12 05:55:00.039804] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:52.551 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.810 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:16:52.810 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.810 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.810 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:52.810 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:52.810 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.810 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:52.810 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:52.810 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:52.810 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:52.810 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:52.810 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:52.810 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:52.810 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:52.810 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:52.810 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:52.810 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:52.810 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:52.810 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:52.810 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:52.810 [2024-12-12 05:55:00.315209] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:16:53.070 /dev/nbd0 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:16:53.070 1+0 records in 00:16:53.070 1+0 records out 00:16:53.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00034588 s, 11.8 MB/s 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:16:53.070 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:16:53.330 496+0 records in 00:16:53.330 496+0 records out 00:16:53.330 97517568 bytes (98 MB, 93 MiB) copied, 0.434999 s, 224 MB/s 00:16:53.330 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:53.330 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:53.330 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:53.330 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- 
# local nbd_list 00:16:53.330 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:53.330 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:53.330 05:55:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:53.590 [2024-12-12 05:55:01.010992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.590 [2024-12-12 05:55:01.041567] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online 
raid5f 64 3 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.590 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.590 "name": "raid_bdev1", 00:16:53.590 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:16:53.590 "strip_size_kb": 64, 00:16:53.590 "state": "online", 00:16:53.590 "raid_level": "raid5f", 00:16:53.590 "superblock": true, 00:16:53.590 "num_base_bdevs": 4, 00:16:53.590 "num_base_bdevs_discovered": 3, 00:16:53.590 
"num_base_bdevs_operational": 3, 00:16:53.590 "base_bdevs_list": [ 00:16:53.590 { 00:16:53.590 "name": null, 00:16:53.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.590 "is_configured": false, 00:16:53.591 "data_offset": 0, 00:16:53.591 "data_size": 63488 00:16:53.591 }, 00:16:53.591 { 00:16:53.591 "name": "BaseBdev2", 00:16:53.591 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:16:53.591 "is_configured": true, 00:16:53.591 "data_offset": 2048, 00:16:53.591 "data_size": 63488 00:16:53.591 }, 00:16:53.591 { 00:16:53.591 "name": "BaseBdev3", 00:16:53.591 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:16:53.591 "is_configured": true, 00:16:53.591 "data_offset": 2048, 00:16:53.591 "data_size": 63488 00:16:53.591 }, 00:16:53.591 { 00:16:53.591 "name": "BaseBdev4", 00:16:53.591 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:16:53.591 "is_configured": true, 00:16:53.591 "data_offset": 2048, 00:16:53.591 "data_size": 63488 00:16:53.591 } 00:16:53.591 ] 00:16:53.591 }' 00:16:53.591 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.591 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.160 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:54.160 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.160 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:54.160 [2024-12-12 05:55:01.468810] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:54.160 [2024-12-12 05:55:01.485539] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:16:54.160 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.160 05:55:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:54.160 
[2024-12-12 05:55:01.494894] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:55.100 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:55.100 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:55.100 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:55.100 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:55.100 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.100 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.100 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.100 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.100 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.100 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.100 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.100 "name": "raid_bdev1", 00:16:55.100 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:16:55.100 "strip_size_kb": 64, 00:16:55.100 "state": "online", 00:16:55.100 "raid_level": "raid5f", 00:16:55.100 "superblock": true, 00:16:55.100 "num_base_bdevs": 4, 00:16:55.100 "num_base_bdevs_discovered": 4, 00:16:55.100 "num_base_bdevs_operational": 4, 00:16:55.100 "process": { 00:16:55.100 "type": "rebuild", 00:16:55.100 "target": "spare", 00:16:55.100 "progress": { 00:16:55.100 "blocks": 19200, 00:16:55.100 "percent": 10 00:16:55.100 } 00:16:55.100 }, 00:16:55.100 "base_bdevs_list": [ 00:16:55.100 { 00:16:55.100 "name": 
"spare", 00:16:55.100 "uuid": "929fb53b-37f4-5f36-a697-42f95c4a07e6", 00:16:55.100 "is_configured": true, 00:16:55.100 "data_offset": 2048, 00:16:55.100 "data_size": 63488 00:16:55.100 }, 00:16:55.100 { 00:16:55.100 "name": "BaseBdev2", 00:16:55.100 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:16:55.100 "is_configured": true, 00:16:55.100 "data_offset": 2048, 00:16:55.100 "data_size": 63488 00:16:55.100 }, 00:16:55.100 { 00:16:55.100 "name": "BaseBdev3", 00:16:55.100 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:16:55.100 "is_configured": true, 00:16:55.100 "data_offset": 2048, 00:16:55.100 "data_size": 63488 00:16:55.100 }, 00:16:55.100 { 00:16:55.100 "name": "BaseBdev4", 00:16:55.100 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:16:55.100 "is_configured": true, 00:16:55.100 "data_offset": 2048, 00:16:55.100 "data_size": 63488 00:16:55.100 } 00:16:55.100 ] 00:16:55.100 }' 00:16:55.100 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.100 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:55.100 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.360 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:55.360 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:55.360 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.360 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.360 [2024-12-12 05:55:02.629674] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:55.360 [2024-12-12 05:55:02.700721] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:55.360 [2024-12-12 
05:55:02.700841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:55.360 [2024-12-12 05:55:02.700859] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:55.360 [2024-12-12 05:55:02.700872] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:55.360 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.360 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:55.360 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:55.360 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:55.360 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.360 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.360 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:55.360 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.360 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.360 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.360 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.360 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.361 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.361 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.361 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:55.361 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.361 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.361 "name": "raid_bdev1", 00:16:55.361 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:16:55.361 "strip_size_kb": 64, 00:16:55.361 "state": "online", 00:16:55.361 "raid_level": "raid5f", 00:16:55.361 "superblock": true, 00:16:55.361 "num_base_bdevs": 4, 00:16:55.361 "num_base_bdevs_discovered": 3, 00:16:55.361 "num_base_bdevs_operational": 3, 00:16:55.361 "base_bdevs_list": [ 00:16:55.361 { 00:16:55.361 "name": null, 00:16:55.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.361 "is_configured": false, 00:16:55.361 "data_offset": 0, 00:16:55.361 "data_size": 63488 00:16:55.361 }, 00:16:55.361 { 00:16:55.361 "name": "BaseBdev2", 00:16:55.361 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:16:55.361 "is_configured": true, 00:16:55.361 "data_offset": 2048, 00:16:55.361 "data_size": 63488 00:16:55.361 }, 00:16:55.361 { 00:16:55.361 "name": "BaseBdev3", 00:16:55.361 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:16:55.361 "is_configured": true, 00:16:55.361 "data_offset": 2048, 00:16:55.361 "data_size": 63488 00:16:55.361 }, 00:16:55.361 { 00:16:55.361 "name": "BaseBdev4", 00:16:55.361 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:16:55.361 "is_configured": true, 00:16:55.361 "data_offset": 2048, 00:16:55.361 "data_size": 63488 00:16:55.361 } 00:16:55.361 ] 00:16:55.361 }' 00:16:55.361 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.361 05:55:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.937 05:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:55.937 05:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:55.937 05:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:55.937 05:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:55.937 05:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:55.937 05:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.937 05:55:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.937 05:55:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.937 05:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:55.937 05:55:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.937 05:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:55.937 "name": "raid_bdev1", 00:16:55.937 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:16:55.937 "strip_size_kb": 64, 00:16:55.937 "state": "online", 00:16:55.937 "raid_level": "raid5f", 00:16:55.937 "superblock": true, 00:16:55.937 "num_base_bdevs": 4, 00:16:55.937 "num_base_bdevs_discovered": 3, 00:16:55.937 "num_base_bdevs_operational": 3, 00:16:55.937 "base_bdevs_list": [ 00:16:55.937 { 00:16:55.937 "name": null, 00:16:55.937 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.937 "is_configured": false, 00:16:55.937 "data_offset": 0, 00:16:55.937 "data_size": 63488 00:16:55.937 }, 00:16:55.937 { 00:16:55.937 "name": "BaseBdev2", 00:16:55.937 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:16:55.937 "is_configured": true, 00:16:55.937 "data_offset": 2048, 00:16:55.937 "data_size": 63488 00:16:55.937 }, 00:16:55.937 { 00:16:55.937 "name": "BaseBdev3", 00:16:55.937 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:16:55.937 "is_configured": true, 
00:16:55.937 "data_offset": 2048, 00:16:55.937 "data_size": 63488 00:16:55.937 }, 00:16:55.937 { 00:16:55.937 "name": "BaseBdev4", 00:16:55.937 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:16:55.937 "is_configured": true, 00:16:55.937 "data_offset": 2048, 00:16:55.937 "data_size": 63488 00:16:55.937 } 00:16:55.937 ] 00:16:55.937 }' 00:16:55.937 05:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:55.937 05:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:55.937 05:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:55.937 05:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:55.937 05:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:55.937 05:55:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.937 05:55:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:55.937 [2024-12-12 05:55:03.337183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:55.937 [2024-12-12 05:55:03.351537] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:16:55.937 05:55:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.937 05:55:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:55.937 [2024-12-12 05:55:03.360574] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:56.876 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:56.876 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:56.876 05:55:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:56.876 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:56.876 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:56.876 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.876 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:56.876 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.876 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:56.876 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.136 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.136 "name": "raid_bdev1", 00:16:57.136 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:16:57.136 "strip_size_kb": 64, 00:16:57.136 "state": "online", 00:16:57.136 "raid_level": "raid5f", 00:16:57.136 "superblock": true, 00:16:57.136 "num_base_bdevs": 4, 00:16:57.136 "num_base_bdevs_discovered": 4, 00:16:57.136 "num_base_bdevs_operational": 4, 00:16:57.136 "process": { 00:16:57.136 "type": "rebuild", 00:16:57.136 "target": "spare", 00:16:57.136 "progress": { 00:16:57.136 "blocks": 19200, 00:16:57.136 "percent": 10 00:16:57.136 } 00:16:57.136 }, 00:16:57.136 "base_bdevs_list": [ 00:16:57.136 { 00:16:57.136 "name": "spare", 00:16:57.136 "uuid": "929fb53b-37f4-5f36-a697-42f95c4a07e6", 00:16:57.136 "is_configured": true, 00:16:57.136 "data_offset": 2048, 00:16:57.136 "data_size": 63488 00:16:57.136 }, 00:16:57.136 { 00:16:57.136 "name": "BaseBdev2", 00:16:57.136 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:16:57.136 "is_configured": true, 00:16:57.136 "data_offset": 2048, 00:16:57.136 "data_size": 63488 
00:16:57.136 }, 00:16:57.136 { 00:16:57.136 "name": "BaseBdev3", 00:16:57.136 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:16:57.136 "is_configured": true, 00:16:57.137 "data_offset": 2048, 00:16:57.137 "data_size": 63488 00:16:57.137 }, 00:16:57.137 { 00:16:57.137 "name": "BaseBdev4", 00:16:57.137 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:16:57.137 "is_configured": true, 00:16:57.137 "data_offset": 2048, 00:16:57.137 "data_size": 63488 00:16:57.137 } 00:16:57.137 ] 00:16:57.137 }' 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:57.137 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=618 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:57.137 05:55:04 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:57.137 "name": "raid_bdev1", 00:16:57.137 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:16:57.137 "strip_size_kb": 64, 00:16:57.137 "state": "online", 00:16:57.137 "raid_level": "raid5f", 00:16:57.137 "superblock": true, 00:16:57.137 "num_base_bdevs": 4, 00:16:57.137 "num_base_bdevs_discovered": 4, 00:16:57.137 "num_base_bdevs_operational": 4, 00:16:57.137 "process": { 00:16:57.137 "type": "rebuild", 00:16:57.137 "target": "spare", 00:16:57.137 "progress": { 00:16:57.137 "blocks": 21120, 00:16:57.137 "percent": 11 00:16:57.137 } 00:16:57.137 }, 00:16:57.137 "base_bdevs_list": [ 00:16:57.137 { 00:16:57.137 "name": "spare", 00:16:57.137 "uuid": "929fb53b-37f4-5f36-a697-42f95c4a07e6", 00:16:57.137 "is_configured": true, 00:16:57.137 "data_offset": 2048, 00:16:57.137 "data_size": 63488 00:16:57.137 }, 00:16:57.137 { 00:16:57.137 "name": "BaseBdev2", 00:16:57.137 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:16:57.137 "is_configured": true, 00:16:57.137 "data_offset": 2048, 00:16:57.137 "data_size": 63488 
00:16:57.137 }, 00:16:57.137 { 00:16:57.137 "name": "BaseBdev3", 00:16:57.137 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:16:57.137 "is_configured": true, 00:16:57.137 "data_offset": 2048, 00:16:57.137 "data_size": 63488 00:16:57.137 }, 00:16:57.137 { 00:16:57.137 "name": "BaseBdev4", 00:16:57.137 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:16:57.137 "is_configured": true, 00:16:57.137 "data_offset": 2048, 00:16:57.137 "data_size": 63488 00:16:57.137 } 00:16:57.137 ] 00:16:57.137 }' 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:57.137 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:57.400 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:57.400 05:55:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:58.362 05:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:58.362 05:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:58.362 05:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:58.362 05:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:58.362 05:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:58.362 05:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:58.362 05:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.362 05:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:16:58.362 05:55:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.362 05:55:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:58.362 05:55:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.362 05:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:58.362 "name": "raid_bdev1", 00:16:58.362 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:16:58.362 "strip_size_kb": 64, 00:16:58.362 "state": "online", 00:16:58.362 "raid_level": "raid5f", 00:16:58.362 "superblock": true, 00:16:58.362 "num_base_bdevs": 4, 00:16:58.362 "num_base_bdevs_discovered": 4, 00:16:58.362 "num_base_bdevs_operational": 4, 00:16:58.362 "process": { 00:16:58.362 "type": "rebuild", 00:16:58.362 "target": "spare", 00:16:58.362 "progress": { 00:16:58.362 "blocks": 44160, 00:16:58.362 "percent": 23 00:16:58.362 } 00:16:58.362 }, 00:16:58.362 "base_bdevs_list": [ 00:16:58.362 { 00:16:58.362 "name": "spare", 00:16:58.362 "uuid": "929fb53b-37f4-5f36-a697-42f95c4a07e6", 00:16:58.362 "is_configured": true, 00:16:58.362 "data_offset": 2048, 00:16:58.362 "data_size": 63488 00:16:58.362 }, 00:16:58.362 { 00:16:58.362 "name": "BaseBdev2", 00:16:58.362 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:16:58.362 "is_configured": true, 00:16:58.362 "data_offset": 2048, 00:16:58.362 "data_size": 63488 00:16:58.362 }, 00:16:58.362 { 00:16:58.362 "name": "BaseBdev3", 00:16:58.362 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:16:58.362 "is_configured": true, 00:16:58.362 "data_offset": 2048, 00:16:58.362 "data_size": 63488 00:16:58.362 }, 00:16:58.362 { 00:16:58.362 "name": "BaseBdev4", 00:16:58.362 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:16:58.362 "is_configured": true, 00:16:58.362 "data_offset": 2048, 00:16:58.362 "data_size": 63488 00:16:58.362 } 00:16:58.362 ] 00:16:58.362 }' 00:16:58.362 05:55:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:58.362 05:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:58.362 05:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:58.362 05:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:58.362 05:55:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:59.745 05:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:59.745 05:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:59.745 05:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:59.745 05:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:59.745 05:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:59.745 05:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:59.745 05:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.745 05:55:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.745 05:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.745 05:55:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.745 05:55:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.745 05:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:59.745 "name": "raid_bdev1", 00:16:59.745 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:16:59.745 
"strip_size_kb": 64, 00:16:59.745 "state": "online", 00:16:59.745 "raid_level": "raid5f", 00:16:59.745 "superblock": true, 00:16:59.745 "num_base_bdevs": 4, 00:16:59.745 "num_base_bdevs_discovered": 4, 00:16:59.745 "num_base_bdevs_operational": 4, 00:16:59.745 "process": { 00:16:59.745 "type": "rebuild", 00:16:59.745 "target": "spare", 00:16:59.745 "progress": { 00:16:59.745 "blocks": 65280, 00:16:59.745 "percent": 34 00:16:59.745 } 00:16:59.745 }, 00:16:59.745 "base_bdevs_list": [ 00:16:59.745 { 00:16:59.745 "name": "spare", 00:16:59.745 "uuid": "929fb53b-37f4-5f36-a697-42f95c4a07e6", 00:16:59.745 "is_configured": true, 00:16:59.745 "data_offset": 2048, 00:16:59.745 "data_size": 63488 00:16:59.745 }, 00:16:59.745 { 00:16:59.745 "name": "BaseBdev2", 00:16:59.745 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:16:59.745 "is_configured": true, 00:16:59.745 "data_offset": 2048, 00:16:59.745 "data_size": 63488 00:16:59.745 }, 00:16:59.745 { 00:16:59.745 "name": "BaseBdev3", 00:16:59.745 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:16:59.745 "is_configured": true, 00:16:59.745 "data_offset": 2048, 00:16:59.745 "data_size": 63488 00:16:59.745 }, 00:16:59.745 { 00:16:59.745 "name": "BaseBdev4", 00:16:59.745 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:16:59.745 "is_configured": true, 00:16:59.745 "data_offset": 2048, 00:16:59.745 "data_size": 63488 00:16:59.745 } 00:16:59.745 ] 00:16:59.745 }' 00:16:59.745 05:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:59.745 05:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:59.745 05:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:59.745 05:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:59.745 05:55:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:00.685 
05:55:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:00.685 05:55:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:00.685 05:55:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:00.685 05:55:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:00.685 05:55:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:00.685 05:55:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:00.685 05:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.685 05:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:00.685 05:55:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.685 05:55:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.685 05:55:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.685 05:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:00.685 "name": "raid_bdev1", 00:17:00.685 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:17:00.685 "strip_size_kb": 64, 00:17:00.685 "state": "online", 00:17:00.685 "raid_level": "raid5f", 00:17:00.685 "superblock": true, 00:17:00.685 "num_base_bdevs": 4, 00:17:00.685 "num_base_bdevs_discovered": 4, 00:17:00.685 "num_base_bdevs_operational": 4, 00:17:00.685 "process": { 00:17:00.685 "type": "rebuild", 00:17:00.685 "target": "spare", 00:17:00.685 "progress": { 00:17:00.685 "blocks": 88320, 00:17:00.685 "percent": 46 00:17:00.685 } 00:17:00.685 }, 00:17:00.685 "base_bdevs_list": [ 00:17:00.685 { 00:17:00.685 "name": "spare", 00:17:00.685 "uuid": 
"929fb53b-37f4-5f36-a697-42f95c4a07e6", 00:17:00.685 "is_configured": true, 00:17:00.685 "data_offset": 2048, 00:17:00.685 "data_size": 63488 00:17:00.685 }, 00:17:00.685 { 00:17:00.685 "name": "BaseBdev2", 00:17:00.685 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:17:00.685 "is_configured": true, 00:17:00.685 "data_offset": 2048, 00:17:00.685 "data_size": 63488 00:17:00.685 }, 00:17:00.685 { 00:17:00.685 "name": "BaseBdev3", 00:17:00.685 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:17:00.685 "is_configured": true, 00:17:00.685 "data_offset": 2048, 00:17:00.685 "data_size": 63488 00:17:00.685 }, 00:17:00.685 { 00:17:00.685 "name": "BaseBdev4", 00:17:00.685 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:17:00.685 "is_configured": true, 00:17:00.685 "data_offset": 2048, 00:17:00.685 "data_size": 63488 00:17:00.685 } 00:17:00.685 ] 00:17:00.685 }' 00:17:00.685 05:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:00.685 05:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:00.685 05:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:00.685 05:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:00.685 05:55:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:02.066 05:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:02.066 05:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:02.066 05:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:02.066 05:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:02.066 05:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:17:02.066 05:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:02.066 05:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.066 05:55:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.066 05:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.066 05:55:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.066 05:55:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.066 05:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:02.066 "name": "raid_bdev1", 00:17:02.066 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:17:02.066 "strip_size_kb": 64, 00:17:02.066 "state": "online", 00:17:02.066 "raid_level": "raid5f", 00:17:02.066 "superblock": true, 00:17:02.066 "num_base_bdevs": 4, 00:17:02.066 "num_base_bdevs_discovered": 4, 00:17:02.066 "num_base_bdevs_operational": 4, 00:17:02.066 "process": { 00:17:02.066 "type": "rebuild", 00:17:02.066 "target": "spare", 00:17:02.066 "progress": { 00:17:02.066 "blocks": 109440, 00:17:02.066 "percent": 57 00:17:02.066 } 00:17:02.066 }, 00:17:02.066 "base_bdevs_list": [ 00:17:02.066 { 00:17:02.066 "name": "spare", 00:17:02.066 "uuid": "929fb53b-37f4-5f36-a697-42f95c4a07e6", 00:17:02.066 "is_configured": true, 00:17:02.066 "data_offset": 2048, 00:17:02.066 "data_size": 63488 00:17:02.066 }, 00:17:02.066 { 00:17:02.066 "name": "BaseBdev2", 00:17:02.066 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:17:02.066 "is_configured": true, 00:17:02.066 "data_offset": 2048, 00:17:02.066 "data_size": 63488 00:17:02.066 }, 00:17:02.066 { 00:17:02.066 "name": "BaseBdev3", 00:17:02.066 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:17:02.066 "is_configured": true, 00:17:02.066 
"data_offset": 2048, 00:17:02.066 "data_size": 63488 00:17:02.066 }, 00:17:02.066 { 00:17:02.066 "name": "BaseBdev4", 00:17:02.066 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:17:02.066 "is_configured": true, 00:17:02.066 "data_offset": 2048, 00:17:02.066 "data_size": 63488 00:17:02.066 } 00:17:02.066 ] 00:17:02.066 }' 00:17:02.066 05:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:02.066 05:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:02.066 05:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:02.066 05:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:02.066 05:55:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:03.005 05:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:03.005 05:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.005 05:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.005 05:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:03.005 05:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:03.005 05:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:03.005 05:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.005 05:55:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.005 05:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.005 05:55:10 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:03.005 05:55:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.005 05:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:03.005 "name": "raid_bdev1", 00:17:03.005 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:17:03.005 "strip_size_kb": 64, 00:17:03.005 "state": "online", 00:17:03.005 "raid_level": "raid5f", 00:17:03.005 "superblock": true, 00:17:03.005 "num_base_bdevs": 4, 00:17:03.005 "num_base_bdevs_discovered": 4, 00:17:03.005 "num_base_bdevs_operational": 4, 00:17:03.005 "process": { 00:17:03.005 "type": "rebuild", 00:17:03.005 "target": "spare", 00:17:03.005 "progress": { 00:17:03.005 "blocks": 132480, 00:17:03.005 "percent": 69 00:17:03.005 } 00:17:03.005 }, 00:17:03.005 "base_bdevs_list": [ 00:17:03.005 { 00:17:03.005 "name": "spare", 00:17:03.005 "uuid": "929fb53b-37f4-5f36-a697-42f95c4a07e6", 00:17:03.005 "is_configured": true, 00:17:03.005 "data_offset": 2048, 00:17:03.005 "data_size": 63488 00:17:03.005 }, 00:17:03.005 { 00:17:03.005 "name": "BaseBdev2", 00:17:03.005 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:17:03.005 "is_configured": true, 00:17:03.005 "data_offset": 2048, 00:17:03.005 "data_size": 63488 00:17:03.005 }, 00:17:03.005 { 00:17:03.005 "name": "BaseBdev3", 00:17:03.005 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:17:03.005 "is_configured": true, 00:17:03.005 "data_offset": 2048, 00:17:03.005 "data_size": 63488 00:17:03.005 }, 00:17:03.005 { 00:17:03.005 "name": "BaseBdev4", 00:17:03.005 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:17:03.005 "is_configured": true, 00:17:03.005 "data_offset": 2048, 00:17:03.005 "data_size": 63488 00:17:03.005 } 00:17:03.005 ] 00:17:03.005 }' 00:17:03.005 05:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:03.005 05:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:17:03.005 05:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:03.005 05:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:03.005 05:55:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:03.944 05:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:03.944 05:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:03.944 05:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:03.944 05:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:04.204 05:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:04.204 05:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:04.204 05:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.204 05:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.204 05:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:04.204 05:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.204 05:55:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.204 05:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:04.204 "name": "raid_bdev1", 00:17:04.204 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:17:04.204 "strip_size_kb": 64, 00:17:04.204 "state": "online", 00:17:04.204 "raid_level": "raid5f", 00:17:04.204 "superblock": true, 00:17:04.204 "num_base_bdevs": 4, 00:17:04.204 "num_base_bdevs_discovered": 4, 
00:17:04.204 "num_base_bdevs_operational": 4, 00:17:04.204 "process": { 00:17:04.204 "type": "rebuild", 00:17:04.204 "target": "spare", 00:17:04.204 "progress": { 00:17:04.204 "blocks": 153600, 00:17:04.204 "percent": 80 00:17:04.204 } 00:17:04.204 }, 00:17:04.204 "base_bdevs_list": [ 00:17:04.204 { 00:17:04.204 "name": "spare", 00:17:04.204 "uuid": "929fb53b-37f4-5f36-a697-42f95c4a07e6", 00:17:04.204 "is_configured": true, 00:17:04.204 "data_offset": 2048, 00:17:04.204 "data_size": 63488 00:17:04.204 }, 00:17:04.204 { 00:17:04.204 "name": "BaseBdev2", 00:17:04.204 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:17:04.204 "is_configured": true, 00:17:04.204 "data_offset": 2048, 00:17:04.204 "data_size": 63488 00:17:04.204 }, 00:17:04.204 { 00:17:04.204 "name": "BaseBdev3", 00:17:04.204 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:17:04.204 "is_configured": true, 00:17:04.204 "data_offset": 2048, 00:17:04.204 "data_size": 63488 00:17:04.204 }, 00:17:04.204 { 00:17:04.205 "name": "BaseBdev4", 00:17:04.205 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:17:04.205 "is_configured": true, 00:17:04.205 "data_offset": 2048, 00:17:04.205 "data_size": 63488 00:17:04.205 } 00:17:04.205 ] 00:17:04.205 }' 00:17:04.205 05:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:04.205 05:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:04.205 05:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:04.205 05:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:04.205 05:55:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:05.144 05:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:05.144 05:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:17:05.144 05:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:05.144 05:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:05.144 05:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:05.144 05:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:05.144 05:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.144 05:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.144 05:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:05.144 05:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.144 05:55:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.406 05:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:05.406 "name": "raid_bdev1", 00:17:05.406 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:17:05.406 "strip_size_kb": 64, 00:17:05.406 "state": "online", 00:17:05.406 "raid_level": "raid5f", 00:17:05.406 "superblock": true, 00:17:05.406 "num_base_bdevs": 4, 00:17:05.406 "num_base_bdevs_discovered": 4, 00:17:05.406 "num_base_bdevs_operational": 4, 00:17:05.406 "process": { 00:17:05.406 "type": "rebuild", 00:17:05.406 "target": "spare", 00:17:05.406 "progress": { 00:17:05.406 "blocks": 176640, 00:17:05.406 "percent": 92 00:17:05.406 } 00:17:05.406 }, 00:17:05.406 "base_bdevs_list": [ 00:17:05.406 { 00:17:05.406 "name": "spare", 00:17:05.406 "uuid": "929fb53b-37f4-5f36-a697-42f95c4a07e6", 00:17:05.406 "is_configured": true, 00:17:05.406 "data_offset": 2048, 00:17:05.406 "data_size": 63488 00:17:05.406 }, 00:17:05.406 { 00:17:05.406 "name": "BaseBdev2", 
00:17:05.406 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:17:05.406 "is_configured": true, 00:17:05.406 "data_offset": 2048, 00:17:05.406 "data_size": 63488 00:17:05.406 }, 00:17:05.406 { 00:17:05.406 "name": "BaseBdev3", 00:17:05.406 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:17:05.406 "is_configured": true, 00:17:05.406 "data_offset": 2048, 00:17:05.406 "data_size": 63488 00:17:05.406 }, 00:17:05.406 { 00:17:05.406 "name": "BaseBdev4", 00:17:05.406 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:17:05.406 "is_configured": true, 00:17:05.406 "data_offset": 2048, 00:17:05.406 "data_size": 63488 00:17:05.406 } 00:17:05.406 ] 00:17:05.406 }' 00:17:05.406 05:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:05.406 05:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:05.406 05:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:05.406 05:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:05.406 05:55:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:05.975 [2024-12-12 05:55:13.404465] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:05.975 [2024-12-12 05:55:13.404541] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:05.975 [2024-12-12 05:55:13.404670] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:06.544 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:06.544 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:06.544 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.544 05:55:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:06.544 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:06.544 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.544 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.544 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.544 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.544 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.544 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.544 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.544 "name": "raid_bdev1", 00:17:06.545 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:17:06.545 "strip_size_kb": 64, 00:17:06.545 "state": "online", 00:17:06.545 "raid_level": "raid5f", 00:17:06.545 "superblock": true, 00:17:06.545 "num_base_bdevs": 4, 00:17:06.545 "num_base_bdevs_discovered": 4, 00:17:06.545 "num_base_bdevs_operational": 4, 00:17:06.545 "base_bdevs_list": [ 00:17:06.545 { 00:17:06.545 "name": "spare", 00:17:06.545 "uuid": "929fb53b-37f4-5f36-a697-42f95c4a07e6", 00:17:06.545 "is_configured": true, 00:17:06.545 "data_offset": 2048, 00:17:06.545 "data_size": 63488 00:17:06.545 }, 00:17:06.545 { 00:17:06.545 "name": "BaseBdev2", 00:17:06.545 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:17:06.545 "is_configured": true, 00:17:06.545 "data_offset": 2048, 00:17:06.545 "data_size": 63488 00:17:06.545 }, 00:17:06.545 { 00:17:06.545 "name": "BaseBdev3", 00:17:06.545 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:17:06.545 "is_configured": true, 00:17:06.545 "data_offset": 2048, 00:17:06.545 
"data_size": 63488 00:17:06.545 }, 00:17:06.545 { 00:17:06.545 "name": "BaseBdev4", 00:17:06.545 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:17:06.545 "is_configured": true, 00:17:06.545 "data_offset": 2048, 00:17:06.545 "data_size": 63488 00:17:06.545 } 00:17:06.545 ] 00:17:06.545 }' 00:17:06.545 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.545 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:06.545 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.545 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:06.545 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:06.545 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:06.545 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:06.545 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:06.545 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:06.545 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:06.545 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.545 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.545 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.545 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.545 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.545 05:55:13 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:06.545 "name": "raid_bdev1", 00:17:06.545 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:17:06.545 "strip_size_kb": 64, 00:17:06.545 "state": "online", 00:17:06.545 "raid_level": "raid5f", 00:17:06.545 "superblock": true, 00:17:06.545 "num_base_bdevs": 4, 00:17:06.545 "num_base_bdevs_discovered": 4, 00:17:06.545 "num_base_bdevs_operational": 4, 00:17:06.545 "base_bdevs_list": [ 00:17:06.545 { 00:17:06.545 "name": "spare", 00:17:06.545 "uuid": "929fb53b-37f4-5f36-a697-42f95c4a07e6", 00:17:06.545 "is_configured": true, 00:17:06.545 "data_offset": 2048, 00:17:06.545 "data_size": 63488 00:17:06.545 }, 00:17:06.545 { 00:17:06.545 "name": "BaseBdev2", 00:17:06.545 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:17:06.545 "is_configured": true, 00:17:06.545 "data_offset": 2048, 00:17:06.545 "data_size": 63488 00:17:06.545 }, 00:17:06.545 { 00:17:06.545 "name": "BaseBdev3", 00:17:06.545 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:17:06.545 "is_configured": true, 00:17:06.545 "data_offset": 2048, 00:17:06.545 "data_size": 63488 00:17:06.545 }, 00:17:06.545 { 00:17:06.545 "name": "BaseBdev4", 00:17:06.545 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:17:06.545 "is_configured": true, 00:17:06.545 "data_offset": 2048, 00:17:06.545 "data_size": 63488 00:17:06.545 } 00:17:06.545 ] 00:17:06.545 }' 00:17:06.545 05:55:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:06.545 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:06.545 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:06.545 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:06.545 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 
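The polling loop traced above keys off jq's `//` alternative operator: while the rebuild runs, `bdev_raid_get_bdevs` reports a `process` object, and once the rebuild finishes that object disappears, so `.process.type // "none"` flips from `rebuild` to `none` and the `break` at `bdev_raid.sh@709` fires. A minimal Python sketch of that fallback logic (JSON abbreviated from the trace; this is not SPDK code):

```python
import json

def process_fields(raid_bdev_info: str):
    """Mimic the jq expressions '.process.type // "none"' and
    '.process.target // "none"' from bdev_raid.sh: the 'process'
    object only exists while a background operation is running."""
    proc = json.loads(raid_bdev_info).get("process") or {}
    return proc.get("type", "none"), proc.get("target", "none")

# Abbreviated from the trace: mid-rebuild vs. after completion.
during = '{"name": "raid_bdev1", "process": {"type": "rebuild", "target": "spare"}}'
after = '{"name": "raid_bdev1"}'
print(process_fields(during))  # ('rebuild', 'spare')
print(process_fields(after))   # ('none', 'none')
```

In the script, the `sleep 1` / `(( SECONDS < timeout ))` loop keeps polling until both fields read `none` or the time budget runs out.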
00:17:06.545 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:06.545 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:06.545 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.545 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.545 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.545 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.545 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.545 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.545 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.545 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.545 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.545 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.805 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.805 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.805 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.805 "name": "raid_bdev1", 00:17:06.805 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:17:06.805 "strip_size_kb": 64, 00:17:06.805 "state": "online", 00:17:06.805 "raid_level": "raid5f", 00:17:06.805 "superblock": true, 00:17:06.805 "num_base_bdevs": 4, 00:17:06.805 "num_base_bdevs_discovered": 4, 00:17:06.805 
"num_base_bdevs_operational": 4, 00:17:06.805 "base_bdevs_list": [ 00:17:06.805 { 00:17:06.805 "name": "spare", 00:17:06.805 "uuid": "929fb53b-37f4-5f36-a697-42f95c4a07e6", 00:17:06.805 "is_configured": true, 00:17:06.805 "data_offset": 2048, 00:17:06.805 "data_size": 63488 00:17:06.805 }, 00:17:06.805 { 00:17:06.805 "name": "BaseBdev2", 00:17:06.805 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:17:06.805 "is_configured": true, 00:17:06.805 "data_offset": 2048, 00:17:06.805 "data_size": 63488 00:17:06.805 }, 00:17:06.805 { 00:17:06.805 "name": "BaseBdev3", 00:17:06.805 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:17:06.805 "is_configured": true, 00:17:06.805 "data_offset": 2048, 00:17:06.805 "data_size": 63488 00:17:06.805 }, 00:17:06.805 { 00:17:06.805 "name": "BaseBdev4", 00:17:06.805 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:17:06.805 "is_configured": true, 00:17:06.805 "data_offset": 2048, 00:17:06.805 "data_size": 63488 00:17:06.805 } 00:17:06.805 ] 00:17:06.805 }' 00:17:06.805 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.805 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.065 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:07.065 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.065 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.065 [2024-12-12 05:55:14.551841] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:07.065 [2024-12-12 05:55:14.551871] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:07.065 [2024-12-12 05:55:14.551944] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:07.065 [2024-12-12 05:55:14.552033] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:17:07.065 [2024-12-12 05:55:14.552054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:07.065 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.065 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.065 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:07.065 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.065 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.065 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:07.325 05:55:14 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:07.325 /dev/nbd0 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:07.325 1+0 records in 00:17:07.325 1+0 records out 00:17:07.325 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563068 s, 7.3 MB/s 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@890 -- # size=4096 00:17:07.325 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.585 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:07.585 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:07.585 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:07.585 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:07.585 05:55:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:07.585 /dev/nbd1 00:17:07.585 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:07.585 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:07.585 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:07.585 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:07.585 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:07.585 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:07.585 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:07.585 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:07.585 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:07.585 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:07.585 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:07.585 1+0 records in 00:17:07.585 1+0 records out 00:17:07.585 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000606354 s, 6.8 MB/s 00:17:07.585 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.585 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:07.585 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:07.585 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:07.585 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:07.585 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:07.585 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:07.585 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:07.844 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:07.844 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:07.844 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:07.844 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:07.844 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:07.844 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:07.844 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 
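The `cmp -i 1048576 /dev/nbd0 /dev/nbd1` step compares the original base bdev and the rebuilt spare over NBD while skipping the first 1 MiB. That offset matches the superblock region implied by the geometry the trace reports (`data_offset` 2048 blocks at a 512-byte `blocklen`), and the raid5f capacity likewise checks out against one base bdev's worth of parity. A quick arithmetic sketch, using values copied from this run:

```python
# Values reported in the trace: each base bdev has data_offset 2048 and
# data_size 63488 (in blocks); the raid is configured with blocklen 512
# and blockcnt 190464.
blocklen = 512
data_offset_blocks = 2048
data_size_blocks = 63488
num_base_bdevs = 4

# The superblock region that `cmp -i` skips, in bytes:
print(data_offset_blocks * blocklen)  # 1048576

# raid5f dedicates one base bdev's worth of capacity to parity:
print(data_size_blocks * (num_base_bdevs - 1))  # 190464
```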
00:17:08.102 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:08.102 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:08.102 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:08.102 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.102 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.102 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:08.102 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:08.102 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.102 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:08.102 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # 
'[' true = true ']' 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.362 [2024-12-12 05:55:15.714615] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:08.362 [2024-12-12 05:55:15.714726] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.362 [2024-12-12 05:55:15.714753] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:08.362 [2024-12-12 05:55:15.714762] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.362 [2024-12-12 05:55:15.716953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.362 [2024-12-12 05:55:15.717003] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:08.362 [2024-12-12 05:55:15.717083] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:08.362 [2024-12-12 05:55:15.717130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:08.362 [2024-12-12 05:55:15.717255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:08.362 [2024-12-12 05:55:15.717352] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:17:08.362 [2024-12-12 05:55:15.717445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:08.362 spare 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.362 [2024-12-12 05:55:15.817359] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:08.362 [2024-12-12 05:55:15.817430] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:08.362 [2024-12-12 05:55:15.817720] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:08.362 [2024-12-12 05:55:15.824442] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:08.362 [2024-12-12 05:55:15.824536] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:08.362 [2024-12-12 05:55:15.824762] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.362 05:55:15 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.362 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.622 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.622 "name": "raid_bdev1", 00:17:08.622 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:17:08.622 "strip_size_kb": 64, 00:17:08.622 "state": "online", 00:17:08.622 "raid_level": "raid5f", 00:17:08.622 "superblock": true, 00:17:08.622 "num_base_bdevs": 4, 00:17:08.622 "num_base_bdevs_discovered": 4, 00:17:08.622 "num_base_bdevs_operational": 4, 00:17:08.622 "base_bdevs_list": [ 00:17:08.622 { 00:17:08.622 "name": "spare", 00:17:08.622 "uuid": "929fb53b-37f4-5f36-a697-42f95c4a07e6", 00:17:08.622 "is_configured": true, 00:17:08.622 "data_offset": 2048, 00:17:08.622 "data_size": 63488 00:17:08.622 }, 00:17:08.622 { 00:17:08.622 "name": "BaseBdev2", 00:17:08.622 "uuid": 
"1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:17:08.622 "is_configured": true, 00:17:08.622 "data_offset": 2048, 00:17:08.622 "data_size": 63488 00:17:08.622 }, 00:17:08.622 { 00:17:08.622 "name": "BaseBdev3", 00:17:08.622 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:17:08.622 "is_configured": true, 00:17:08.622 "data_offset": 2048, 00:17:08.622 "data_size": 63488 00:17:08.622 }, 00:17:08.622 { 00:17:08.622 "name": "BaseBdev4", 00:17:08.622 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:17:08.622 "is_configured": true, 00:17:08.622 "data_offset": 2048, 00:17:08.622 "data_size": 63488 00:17:08.622 } 00:17:08.622 ] 00:17:08.622 }' 00:17:08.622 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.622 05:55:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.882 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:08.882 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:08.882 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:08.882 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:08.882 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:08.882 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.882 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:08.882 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.882 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.882 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.882 05:55:16 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:08.882 "name": "raid_bdev1", 00:17:08.882 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:17:08.882 "strip_size_kb": 64, 00:17:08.882 "state": "online", 00:17:08.882 "raid_level": "raid5f", 00:17:08.882 "superblock": true, 00:17:08.882 "num_base_bdevs": 4, 00:17:08.882 "num_base_bdevs_discovered": 4, 00:17:08.882 "num_base_bdevs_operational": 4, 00:17:08.882 "base_bdevs_list": [ 00:17:08.882 { 00:17:08.882 "name": "spare", 00:17:08.882 "uuid": "929fb53b-37f4-5f36-a697-42f95c4a07e6", 00:17:08.882 "is_configured": true, 00:17:08.882 "data_offset": 2048, 00:17:08.882 "data_size": 63488 00:17:08.882 }, 00:17:08.882 { 00:17:08.882 "name": "BaseBdev2", 00:17:08.882 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:17:08.882 "is_configured": true, 00:17:08.882 "data_offset": 2048, 00:17:08.882 "data_size": 63488 00:17:08.882 }, 00:17:08.882 { 00:17:08.882 "name": "BaseBdev3", 00:17:08.882 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:17:08.882 "is_configured": true, 00:17:08.882 "data_offset": 2048, 00:17:08.882 "data_size": 63488 00:17:08.882 }, 00:17:08.882 { 00:17:08.882 "name": "BaseBdev4", 00:17:08.882 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:17:08.882 "is_configured": true, 00:17:08.882 "data_offset": 2048, 00:17:08.882 "data_size": 63488 00:17:08.882 } 00:17:08.882 ] 00:17:08.882 }' 00:17:08.882 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:08.882 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:08.882 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.142 
05:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.142 [2024-12-12 05:55:16.467809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.142 "name": "raid_bdev1", 00:17:09.142 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:17:09.142 "strip_size_kb": 64, 00:17:09.142 "state": "online", 00:17:09.142 "raid_level": "raid5f", 00:17:09.142 "superblock": true, 00:17:09.142 "num_base_bdevs": 4, 00:17:09.142 "num_base_bdevs_discovered": 3, 00:17:09.142 "num_base_bdevs_operational": 3, 00:17:09.142 "base_bdevs_list": [ 00:17:09.142 { 00:17:09.142 "name": null, 00:17:09.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.142 "is_configured": false, 00:17:09.142 "data_offset": 0, 00:17:09.142 "data_size": 63488 00:17:09.142 }, 00:17:09.142 { 00:17:09.142 "name": "BaseBdev2", 00:17:09.142 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:17:09.142 "is_configured": true, 00:17:09.142 "data_offset": 2048, 00:17:09.142 "data_size": 63488 00:17:09.142 }, 00:17:09.142 { 00:17:09.142 "name": "BaseBdev3", 00:17:09.142 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:17:09.142 "is_configured": true, 00:17:09.142 "data_offset": 2048, 00:17:09.142 "data_size": 63488 00:17:09.142 }, 00:17:09.142 { 00:17:09.142 "name": "BaseBdev4", 
00:17:09.142 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:17:09.142 "is_configured": true, 00:17:09.142 "data_offset": 2048, 00:17:09.142 "data_size": 63488 00:17:09.142 } 00:17:09.142 ] 00:17:09.142 }' 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.142 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.402 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:09.402 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.402 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.402 [2024-12-12 05:55:16.859151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:09.402 [2024-12-12 05:55:16.859376] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:09.402 [2024-12-12 05:55:16.859447] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:09.402 [2024-12-12 05:55:16.859544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:09.402 [2024-12-12 05:55:16.874005] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:09.402 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.402 05:55:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:09.402 [2024-12-12 05:55:16.882305] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:10.783 05:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:10.784 05:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:10.784 05:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:10.784 05:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:10.784 05:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:10.784 05:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.784 05:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.784 05:55:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.784 05:55:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.784 05:55:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.784 05:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:10.784 "name": "raid_bdev1", 00:17:10.784 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:17:10.784 "strip_size_kb": 64, 00:17:10.784 "state": "online", 00:17:10.784 
"raid_level": "raid5f", 00:17:10.784 "superblock": true, 00:17:10.784 "num_base_bdevs": 4, 00:17:10.784 "num_base_bdevs_discovered": 4, 00:17:10.784 "num_base_bdevs_operational": 4, 00:17:10.784 "process": { 00:17:10.784 "type": "rebuild", 00:17:10.784 "target": "spare", 00:17:10.784 "progress": { 00:17:10.784 "blocks": 19200, 00:17:10.784 "percent": 10 00:17:10.784 } 00:17:10.784 }, 00:17:10.784 "base_bdevs_list": [ 00:17:10.784 { 00:17:10.784 "name": "spare", 00:17:10.784 "uuid": "929fb53b-37f4-5f36-a697-42f95c4a07e6", 00:17:10.784 "is_configured": true, 00:17:10.784 "data_offset": 2048, 00:17:10.784 "data_size": 63488 00:17:10.784 }, 00:17:10.784 { 00:17:10.784 "name": "BaseBdev2", 00:17:10.784 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:17:10.784 "is_configured": true, 00:17:10.784 "data_offset": 2048, 00:17:10.784 "data_size": 63488 00:17:10.784 }, 00:17:10.784 { 00:17:10.784 "name": "BaseBdev3", 00:17:10.784 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:17:10.784 "is_configured": true, 00:17:10.784 "data_offset": 2048, 00:17:10.784 "data_size": 63488 00:17:10.784 }, 00:17:10.784 { 00:17:10.784 "name": "BaseBdev4", 00:17:10.784 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:17:10.784 "is_configured": true, 00:17:10.784 "data_offset": 2048, 00:17:10.784 "data_size": 63488 00:17:10.784 } 00:17:10.784 ] 00:17:10.784 }' 00:17:10.784 05:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:10.784 05:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:10.784 05:55:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.784 [2024-12-12 05:55:18.041074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.784 [2024-12-12 05:55:18.088016] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:10.784 [2024-12-12 05:55:18.088144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.784 [2024-12-12 05:55:18.088163] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:10.784 [2024-12-12 05:55:18.088172] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.784 "name": "raid_bdev1", 00:17:10.784 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:17:10.784 "strip_size_kb": 64, 00:17:10.784 "state": "online", 00:17:10.784 "raid_level": "raid5f", 00:17:10.784 "superblock": true, 00:17:10.784 "num_base_bdevs": 4, 00:17:10.784 "num_base_bdevs_discovered": 3, 00:17:10.784 "num_base_bdevs_operational": 3, 00:17:10.784 "base_bdevs_list": [ 00:17:10.784 { 00:17:10.784 "name": null, 00:17:10.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.784 "is_configured": false, 00:17:10.784 "data_offset": 0, 00:17:10.784 "data_size": 63488 00:17:10.784 }, 00:17:10.784 { 00:17:10.784 "name": "BaseBdev2", 00:17:10.784 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:17:10.784 "is_configured": true, 00:17:10.784 "data_offset": 2048, 00:17:10.784 "data_size": 63488 00:17:10.784 }, 00:17:10.784 { 00:17:10.784 "name": "BaseBdev3", 00:17:10.784 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:17:10.784 "is_configured": true, 00:17:10.784 "data_offset": 2048, 00:17:10.784 "data_size": 63488 00:17:10.784 }, 00:17:10.784 { 00:17:10.784 "name": "BaseBdev4", 00:17:10.784 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:17:10.784 "is_configured": true, 00:17:10.784 "data_offset": 2048, 00:17:10.784 "data_size": 63488 00:17:10.784 } 00:17:10.784 ] 00:17:10.784 
}' 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.784 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.047 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:11.047 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.047 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.047 [2024-12-12 05:55:18.516003] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:11.047 [2024-12-12 05:55:18.516104] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:11.047 [2024-12-12 05:55:18.516180] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:17:11.047 [2024-12-12 05:55:18.516220] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:11.047 [2024-12-12 05:55:18.516738] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:11.047 [2024-12-12 05:55:18.516810] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:11.047 [2024-12-12 05:55:18.516953] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:11.047 [2024-12-12 05:55:18.517001] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:11.047 [2024-12-12 05:55:18.517050] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:11.047 [2024-12-12 05:55:18.517128] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:11.047 [2024-12-12 05:55:18.531151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:17:11.047 spare 00:17:11.047 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.047 05:55:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:11.047 [2024-12-12 05:55:18.539546] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.429 "name": "raid_bdev1", 00:17:12.429 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:17:12.429 "strip_size_kb": 64, 00:17:12.429 "state": 
"online", 00:17:12.429 "raid_level": "raid5f", 00:17:12.429 "superblock": true, 00:17:12.429 "num_base_bdevs": 4, 00:17:12.429 "num_base_bdevs_discovered": 4, 00:17:12.429 "num_base_bdevs_operational": 4, 00:17:12.429 "process": { 00:17:12.429 "type": "rebuild", 00:17:12.429 "target": "spare", 00:17:12.429 "progress": { 00:17:12.429 "blocks": 19200, 00:17:12.429 "percent": 10 00:17:12.429 } 00:17:12.429 }, 00:17:12.429 "base_bdevs_list": [ 00:17:12.429 { 00:17:12.429 "name": "spare", 00:17:12.429 "uuid": "929fb53b-37f4-5f36-a697-42f95c4a07e6", 00:17:12.429 "is_configured": true, 00:17:12.429 "data_offset": 2048, 00:17:12.429 "data_size": 63488 00:17:12.429 }, 00:17:12.429 { 00:17:12.429 "name": "BaseBdev2", 00:17:12.429 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:17:12.429 "is_configured": true, 00:17:12.429 "data_offset": 2048, 00:17:12.429 "data_size": 63488 00:17:12.429 }, 00:17:12.429 { 00:17:12.429 "name": "BaseBdev3", 00:17:12.429 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:17:12.429 "is_configured": true, 00:17:12.429 "data_offset": 2048, 00:17:12.429 "data_size": 63488 00:17:12.429 }, 00:17:12.429 { 00:17:12.429 "name": "BaseBdev4", 00:17:12.429 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:17:12.429 "is_configured": true, 00:17:12.429 "data_offset": 2048, 00:17:12.429 "data_size": 63488 00:17:12.429 } 00:17:12.429 ] 00:17:12.429 }' 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:12.429 05:55:19 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.429 [2024-12-12 05:55:19.698313] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:12.429 [2024-12-12 05:55:19.745181] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:12.429 [2024-12-12 05:55:19.745230] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:12.429 [2024-12-12 05:55:19.745248] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:12.429 [2024-12-12 05:55:19.745255] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.429 05:55:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.429 "name": "raid_bdev1", 00:17:12.429 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:17:12.429 "strip_size_kb": 64, 00:17:12.429 "state": "online", 00:17:12.429 "raid_level": "raid5f", 00:17:12.429 "superblock": true, 00:17:12.429 "num_base_bdevs": 4, 00:17:12.429 "num_base_bdevs_discovered": 3, 00:17:12.429 "num_base_bdevs_operational": 3, 00:17:12.429 "base_bdevs_list": [ 00:17:12.429 { 00:17:12.429 "name": null, 00:17:12.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.429 "is_configured": false, 00:17:12.429 "data_offset": 0, 00:17:12.429 "data_size": 63488 00:17:12.429 }, 00:17:12.429 { 00:17:12.429 "name": "BaseBdev2", 00:17:12.429 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:17:12.429 "is_configured": true, 00:17:12.429 "data_offset": 2048, 00:17:12.429 "data_size": 63488 00:17:12.429 }, 00:17:12.429 { 00:17:12.429 "name": "BaseBdev3", 00:17:12.429 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:17:12.429 "is_configured": true, 00:17:12.429 "data_offset": 2048, 00:17:12.429 "data_size": 63488 00:17:12.429 }, 00:17:12.429 { 00:17:12.429 "name": "BaseBdev4", 00:17:12.429 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:17:12.429 "is_configured": true, 00:17:12.429 "data_offset": 2048, 00:17:12.429 
"data_size": 63488 00:17:12.429 } 00:17:12.429 ] 00:17:12.429 }' 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.429 05:55:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.689 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:12.689 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:12.689 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:12.689 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:12.689 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:12.689 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.689 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.689 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.689 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:12.689 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.689 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:12.689 "name": "raid_bdev1", 00:17:12.689 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:17:12.689 "strip_size_kb": 64, 00:17:12.689 "state": "online", 00:17:12.689 "raid_level": "raid5f", 00:17:12.689 "superblock": true, 00:17:12.689 "num_base_bdevs": 4, 00:17:12.689 "num_base_bdevs_discovered": 3, 00:17:12.689 "num_base_bdevs_operational": 3, 00:17:12.689 "base_bdevs_list": [ 00:17:12.689 { 00:17:12.689 "name": null, 00:17:12.689 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.689 
"is_configured": false, 00:17:12.689 "data_offset": 0, 00:17:12.689 "data_size": 63488 00:17:12.689 }, 00:17:12.689 { 00:17:12.689 "name": "BaseBdev2", 00:17:12.689 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:17:12.689 "is_configured": true, 00:17:12.689 "data_offset": 2048, 00:17:12.689 "data_size": 63488 00:17:12.689 }, 00:17:12.689 { 00:17:12.689 "name": "BaseBdev3", 00:17:12.689 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:17:12.689 "is_configured": true, 00:17:12.689 "data_offset": 2048, 00:17:12.689 "data_size": 63488 00:17:12.689 }, 00:17:12.689 { 00:17:12.689 "name": "BaseBdev4", 00:17:12.689 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:17:12.689 "is_configured": true, 00:17:12.689 "data_offset": 2048, 00:17:12.689 "data_size": 63488 00:17:12.689 } 00:17:12.689 ] 00:17:12.689 }' 00:17:12.949 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:12.949 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:12.949 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:12.949 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:12.949 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:12.949 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.949 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.949 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.949 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:12.949 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.949 05:55:20 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.949 [2024-12-12 05:55:20.317408] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:12.949 [2024-12-12 05:55:20.317527] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:12.949 [2024-12-12 05:55:20.317555] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:17:12.949 [2024-12-12 05:55:20.317564] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:12.949 [2024-12-12 05:55:20.318006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:12.949 [2024-12-12 05:55:20.318025] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:12.949 [2024-12-12 05:55:20.318098] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:12.949 [2024-12-12 05:55:20.318113] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:12.949 [2024-12-12 05:55:20.318125] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:12.949 [2024-12-12 05:55:20.318134] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:12.949 BaseBdev1 00:17:12.949 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.949 05:55:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:13.889 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:13.889 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:13.889 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:13.889 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.889 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.889 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:13.889 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.889 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.889 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.889 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.889 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.889 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:13.889 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.889 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.889 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.889 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.889 "name": "raid_bdev1", 00:17:13.889 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3", 00:17:13.889 "strip_size_kb": 64, 00:17:13.889 "state": "online", 00:17:13.889 "raid_level": "raid5f", 00:17:13.889 "superblock": true, 00:17:13.889 "num_base_bdevs": 4, 00:17:13.889 "num_base_bdevs_discovered": 3, 00:17:13.889 "num_base_bdevs_operational": 3, 00:17:13.889 "base_bdevs_list": [ 00:17:13.889 { 00:17:13.889 "name": null, 00:17:13.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.889 "is_configured": false, 00:17:13.889 
"data_offset": 0, 00:17:13.889 "data_size": 63488 00:17:13.889 }, 00:17:13.889 { 00:17:13.889 "name": "BaseBdev2", 00:17:13.889 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e", 00:17:13.889 "is_configured": true, 00:17:13.889 "data_offset": 2048, 00:17:13.889 "data_size": 63488 00:17:13.889 }, 00:17:13.889 { 00:17:13.889 "name": "BaseBdev3", 00:17:13.889 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3", 00:17:13.889 "is_configured": true, 00:17:13.889 "data_offset": 2048, 00:17:13.889 "data_size": 63488 00:17:13.889 }, 00:17:13.889 { 00:17:13.889 "name": "BaseBdev4", 00:17:13.889 "uuid": "d6043512-11be-5451-a001-eec6afdb5207", 00:17:13.889 "is_configured": true, 00:17:13.889 "data_offset": 2048, 00:17:13.889 "data_size": 63488 00:17:13.889 } 00:17:13.889 ] 00:17:13.889 }' 00:17:13.889 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.889 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.459 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:14.459 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:14.460 "name": "raid_bdev1",
00:17:14.460 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3",
00:17:14.460 "strip_size_kb": 64,
00:17:14.460 "state": "online",
00:17:14.460 "raid_level": "raid5f",
00:17:14.460 "superblock": true,
00:17:14.460 "num_base_bdevs": 4,
00:17:14.460 "num_base_bdevs_discovered": 3,
00:17:14.460 "num_base_bdevs_operational": 3,
00:17:14.460 "base_bdevs_list": [
00:17:14.460 {
00:17:14.460 "name": null,
00:17:14.460 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:14.460 "is_configured": false,
00:17:14.460 "data_offset": 0,
00:17:14.460 "data_size": 63488
00:17:14.460 },
00:17:14.460 {
00:17:14.460 "name": "BaseBdev2",
00:17:14.460 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e",
00:17:14.460 "is_configured": true,
00:17:14.460 "data_offset": 2048,
00:17:14.460 "data_size": 63488
00:17:14.460 },
00:17:14.460 {
00:17:14.460 "name": "BaseBdev3",
00:17:14.460 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3",
00:17:14.460 "is_configured": true,
00:17:14.460 "data_offset": 2048,
00:17:14.460 "data_size": 63488
00:17:14.460 },
00:17:14.460 {
00:17:14.460 "name": "BaseBdev4",
00:17:14.460 "uuid": "d6043512-11be-5451-a001-eec6afdb5207",
00:17:14.460 "is_configured": true,
00:17:14.460 "data_offset": 2048,
00:17:14.460 "data_size": 63488
00:17:14.460 }
00:17:14.460 ]
00:17:14.460 }'
00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:14.460
05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0
00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1
00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
[2024-12-12 05:55:21.926702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
[2024-12-12 05:55:21.926847] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5)
[2024-12-12 05:55:21.926862] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid
request:
00:17:14.460 {
00:17:14.460 "base_bdev": "BaseBdev1",
00:17:14.460 "raid_bdev": "raid_bdev1",
00:17:14.460 "method": "bdev_raid_add_base_bdev",
00:17:14.460 "req_id": 1
00:17:14.460 }
00:17:14.460 Got JSON-RPC error response
00:17:14.460 response:
00:17:14.460 {
00:17:14.460 "code": -22,
00:17:14.460 "message":
"Failed to add base bdev to RAID bdev: Invalid argument"
00:17:14.460 }
00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1
00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:14.460 05:55:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1
00:17:15.842 05:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3
00:17:15.842 05:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:15.842 05:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:15.842 05:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:15.842 05:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:15.842 05:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:17:15.842 05:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:15.842 05:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:15.842 05:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:15.842 05:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:15.842 05:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:15.842 05:55:22 bdev_raid.raid5f_rebuild_test_sb --
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:15.842 05:55:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:15.842 05:55:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:15.842 05:55:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:15.842 05:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:15.842 "name": "raid_bdev1",
00:17:15.842 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3",
00:17:15.842 "strip_size_kb": 64,
00:17:15.842 "state": "online",
00:17:15.842 "raid_level": "raid5f",
00:17:15.842 "superblock": true,
00:17:15.842 "num_base_bdevs": 4,
00:17:15.842 "num_base_bdevs_discovered": 3,
00:17:15.842 "num_base_bdevs_operational": 3,
00:17:15.842 "base_bdevs_list": [
00:17:15.842 {
00:17:15.842 "name": null,
00:17:15.842 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:15.842 "is_configured": false,
00:17:15.842 "data_offset": 0,
00:17:15.842 "data_size": 63488
00:17:15.842 },
00:17:15.842 {
00:17:15.842 "name": "BaseBdev2",
00:17:15.842 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e",
00:17:15.842 "is_configured": true,
00:17:15.842 "data_offset": 2048,
00:17:15.842 "data_size": 63488
00:17:15.842 },
00:17:15.842 {
00:17:15.842 "name": "BaseBdev3",
00:17:15.842 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3",
00:17:15.842 "is_configured": true,
00:17:15.842 "data_offset": 2048,
00:17:15.842 "data_size": 63488
00:17:15.842 },
00:17:15.842 {
00:17:15.842 "name": "BaseBdev4",
00:17:15.842 "uuid": "d6043512-11be-5451-a001-eec6afdb5207",
00:17:15.842 "is_configured": true,
00:17:15.842 "data_offset": 2048,
00:17:15.842 "data_size": 63488
00:17:15.842 }
00:17:15.842 ]
00:17:15.842 }'
00:17:15.842 05:55:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:15.842 05:55:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10
-- # set +x
00:17:16.102 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none
00:17:16.102 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:17:16.103 "name": "raid_bdev1",
00:17:16.103 "uuid": "79eea3e9-701a-4fd4-8756-e8c1af137dd3",
00:17:16.103 "strip_size_kb": 64,
00:17:16.103 "state": "online",
00:17:16.103 "raid_level": "raid5f",
00:17:16.103 "superblock": true,
00:17:16.103 "num_base_bdevs": 4,
00:17:16.103 "num_base_bdevs_discovered": 3,
00:17:16.103 "num_base_bdevs_operational": 3,
00:17:16.103 "base_bdevs_list": [
00:17:16.103 {
00:17:16.103 "name": null,
00:17:16.103 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:16.103 "is_configured": false,
00:17:16.103 "data_offset": 0,
00:17:16.103 "data_size": 63488
00:17:16.103 },
00:17:16.103 {
00:17:16.103 "name": "BaseBdev2",
00:17:16.103 "uuid": "1dfbdbca-9ae2-57be-b8b0-b2b1681e281e",
00:17:16.103 "is_configured": true,
00:17:16.103 "data_offset": 2048,
00:17:16.103 "data_size": 63488
00:17:16.103 },
00:17:16.103 {
00:17:16.103 "name": "BaseBdev3",
00:17:16.103 "uuid": "0774d3ae-6653-5e30-80c1-75ca40d864d3",
00:17:16.103 "is_configured": true,
00:17:16.103 "data_offset": 2048,
00:17:16.103 "data_size": 63488
00:17:16.103 },
00:17:16.103 {
00:17:16.103 "name": "BaseBdev4",
00:17:16.103 "uuid": "d6043512-11be-5451-a001-eec6afdb5207",
00:17:16.103 "is_configured": true,
00:17:16.103 "data_offset": 2048,
00:17:16.103 "data_size": 63488
00:17:16.103 }
00:17:16.103 ]
00:17:16.103 }'
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 84790
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 84790 ']'
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 84790
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84790
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:16.103 05:55:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972
-- # echo 'killing process with pid 84790'
killing process with pid 84790
05:55:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 84790
Received shutdown signal, test time was about 60.000000 seconds
00:17:16.103
00:17:16.103 Latency(us)
00:17:16.103 [2024-12-12T05:55:23.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:16.103 [2024-12-12T05:55:23.625Z] ===================================================================================================================
00:17:16.103 [2024-12-12T05:55:23.625Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
[2024-12-12 05:55:23.579807] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
[2024-12-12 05:55:23.579955] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
05:55:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 84790
[2024-12-12 05:55:23.580029] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
[2024-12-12 05:55:23.580042] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline
00:17:16.673 [2024-12-12 05:55:24.035454] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:17:17.613 05:55:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0
00:17:17.613
00:17:17.613 real 0m26.676s
00:17:17.613 user 0m33.483s
00:17:17.613 sys 0m2.948s
00:17:17.613 05:55:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:17.613 05:55:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x
00:17:17.613 ************************************
00:17:17.613 END TEST raid5f_rebuild_test_sb
00:17:17.613 ************************************
00:17:17.613 05:55:25 bdev_raid --
bdev/bdev_raid.sh@995 -- # base_blocklen=4096
00:17:17.613 05:55:25 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true
00:17:17.613 05:55:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:17:17.613 05:55:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:17:17.613 05:55:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:17:17.872 ************************************
00:17:17.872 START TEST raid_state_function_test_sb_4k
00:17:17.872 ************************************
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:17:17.872 05:55:25
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']'
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']'
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=85445
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 85445'
Process raid pid: 85445
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 85445
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85445 ']'
00:17:17.872 05:55:25
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:17.872 05:55:25 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:18.132 [2024-12-12 05:55:25.242378] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization...
00:17:18.132 [2024-12-12 05:55:25.242509] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:18.132 [2024-12-12 05:55:25.415911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:18.132 [2024-12-12 05:55:25.515445] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:17:18.391 [2024-12-12 05:55:25.719639] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:18.391 [2024-12-12 05:55:25.719673] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:18.651 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:17:18.651 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0
00:17:18.651 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n
Existed_Raid
00:17:18.651 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:18.651 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
[2024-12-12 05:55:26.051728] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-12-12 05:55:26.051843] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-12-12 05:55:26.051858] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
[2024-12-12 05:55:26.051895] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:18.651 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:18.651 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:18.651 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:18.651 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:18.651 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:18.651 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:18.651 "name": "Existed_Raid",
00:17:18.651 "uuid": "37e1f8d5-b783-4af2-b01c-7aa6e6e88f05",
00:17:18.651 "strip_size_kb": 0,
00:17:18.651 "state": "configuring",
00:17:18.651 "raid_level": "raid1",
00:17:18.651 "superblock": true,
00:17:18.651 "num_base_bdevs": 2,
00:17:18.651 "num_base_bdevs_discovered": 0,
00:17:18.651 "num_base_bdevs_operational": 2,
00:17:18.651 "base_bdevs_list": [
00:17:18.651 {
00:17:18.651 "name": "BaseBdev1",
00:17:18.651 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:18.651 "is_configured": false,
00:17:18.651 "data_offset": 0,
00:17:18.651 "data_size": 0
00:17:18.651 },
00:17:18.651 {
00:17:18.651 "name": "BaseBdev2",
00:17:18.651 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:18.651 "is_configured": false,
00:17:18.651 "data_offset": 0,
00:17:18.651 "data_size": 0
00:17:18.651 }
00:17:18.651 ]
00:17:18.651 }'
00:17:18.651 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:18.651 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:19.221 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd
bdev_raid_delete Existed_Raid
00:17:19.221 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:19.221 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
[2024-12-12 05:55:26.478947] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
[2024-12-12 05:55:26.479025] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
[2024-12-12 05:55:26.486936] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
[2024-12-12 05:55:26.487014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
[2024-12-12 05:55:26.487041] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
[2024-12-12 05:55:26.487065] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
05:55:26
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
[2024-12-12 05:55:26.532575] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout=
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]]
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
[
{
"name": "BaseBdev1",
"aliases": [
"64a4b392-30e1-4504-ac77-af1d7b52e8f3"
00:17:19.221 ],
00:17:19.221 "product_name": "Malloc disk",
00:17:19.221 "block_size": 4096,
00:17:19.221 "num_blocks": 8192,
00:17:19.221 "uuid": "64a4b392-30e1-4504-ac77-af1d7b52e8f3",
00:17:19.221 "assigned_rate_limits": {
00:17:19.221 "rw_ios_per_sec": 0,
00:17:19.221 "rw_mbytes_per_sec": 0,
00:17:19.221 "r_mbytes_per_sec": 0,
00:17:19.221 "w_mbytes_per_sec": 0
00:17:19.221 },
00:17:19.221 "claimed": true,
00:17:19.221 "claim_type": "exclusive_write",
00:17:19.221 "zoned": false,
00:17:19.221 "supported_io_types": {
00:17:19.221 "read": true,
00:17:19.221 "write": true,
00:17:19.221 "unmap": true,
00:17:19.221 "flush": true,
00:17:19.221 "reset": true,
00:17:19.221 "nvme_admin": false,
00:17:19.221 "nvme_io": false,
00:17:19.221 "nvme_io_md": false,
00:17:19.221 "write_zeroes": true,
00:17:19.221 "zcopy": true,
00:17:19.221 "get_zone_info": false,
00:17:19.221 "zone_management": false,
00:17:19.221 "zone_append": false,
00:17:19.221 "compare": false,
00:17:19.221 "compare_and_write": false,
00:17:19.221 "abort": true,
00:17:19.221 "seek_hole": false,
00:17:19.221 "seek_data": false,
00:17:19.221 "copy": true,
00:17:19.221 "nvme_iov_md": false
00:17:19.221 },
00:17:19.221 "memory_domains": [
00:17:19.221 {
00:17:19.221 "dma_device_id": "system",
00:17:19.221 "dma_device_type": 1
00:17:19.221 },
00:17:19.221 {
00:17:19.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:17:19.221 "dma_device_type": 2
00:17:19.221 }
00:17:19.221 ],
00:17:19.221 "driver_specific": {}
00:17:19.221 }
00:17:19.221 ]
00:17:19.221 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:19.221 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0
00:17:19.221 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:17:19.221 05:55:26 bdev_raid.raid_state_function_test_sb_4k --
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:17:19.221 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:19.221 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:17:19.221 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:17:19.221 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:17:19.221 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:19.221 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:19.221 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:19.221 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:19.221 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:19.221 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:17:19.221 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:19.221 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:19.221 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:19.221 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:19.222 "name": "Existed_Raid",
00:17:19.222 "uuid": "8a3ce391-3ebd-4b4d-92d8-cf842e2a3fca",
00:17:19.222 "strip_size_kb": 0,
00:17:19.222 "state": "configuring",
00:17:19.222 "raid_level": "raid1",
00:17:19.222 "superblock": true,
00:17:19.222 "num_base_bdevs": 2,
00:17:19.222
"num_base_bdevs_discovered": 1,
00:17:19.222 "num_base_bdevs_operational": 2,
00:17:19.222 "base_bdevs_list": [
00:17:19.222 {
00:17:19.222 "name": "BaseBdev1",
00:17:19.222 "uuid": "64a4b392-30e1-4504-ac77-af1d7b52e8f3",
00:17:19.222 "is_configured": true,
00:17:19.222 "data_offset": 256,
00:17:19.222 "data_size": 7936
00:17:19.222 },
00:17:19.222 {
00:17:19.222 "name": "BaseBdev2",
00:17:19.222 "uuid": "00000000-0000-0000-0000-000000000000",
00:17:19.222 "is_configured": false,
00:17:19.222 "data_offset": 0,
00:17:19.222 "data_size": 0
00:17:19.222 }
00:17:19.222 ]
00:17:19.222 }'
00:17:19.222 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:19.222 05:55:26 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:19.791 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:17:19.791 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:19.791 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
00:17:19.791 [2024-12-12 05:55:27.039694] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
[2024-12-12 05:55:27.039730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable
05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x
[2024-12-12 05:55:27.047729]
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:19.791 [2024-12-12 05:55:27.049430] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:19.791 [2024-12-12 05:55:27.049467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:19.791 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.791 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:19.791 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:19.791 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:19.791 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:19.791 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:19.791 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:19.791 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:19.791 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:19.791 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.791 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.791 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.791 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.791 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:19.791 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.791 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.792 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:19.792 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.792 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.792 "name": "Existed_Raid", 00:17:19.792 "uuid": "880566c5-19c5-43d2-a4ff-177243097b09", 00:17:19.792 "strip_size_kb": 0, 00:17:19.792 "state": "configuring", 00:17:19.792 "raid_level": "raid1", 00:17:19.792 "superblock": true, 00:17:19.792 "num_base_bdevs": 2, 00:17:19.792 "num_base_bdevs_discovered": 1, 00:17:19.792 "num_base_bdevs_operational": 2, 00:17:19.792 "base_bdevs_list": [ 00:17:19.792 { 00:17:19.792 "name": "BaseBdev1", 00:17:19.792 "uuid": "64a4b392-30e1-4504-ac77-af1d7b52e8f3", 00:17:19.792 "is_configured": true, 00:17:19.792 "data_offset": 256, 00:17:19.792 "data_size": 7936 00:17:19.792 }, 00:17:19.792 { 00:17:19.792 "name": "BaseBdev2", 00:17:19.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:19.792 "is_configured": false, 00:17:19.792 "data_offset": 0, 00:17:19.792 "data_size": 0 00:17:19.792 } 00:17:19.792 ] 00:17:19.792 }' 00:17:19.792 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.792 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.052 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:17:20.052 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.052 05:55:27 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.052 [2024-12-12 05:55:27.563744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:20.052 [2024-12-12 05:55:27.564007] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:20.052 [2024-12-12 05:55:27.564023] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:20.052 [2024-12-12 05:55:27.564263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:20.052 BaseBdev2 00:17:20.052 [2024-12-12 05:55:27.564428] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:20.052 [2024-12-12 05:55:27.564465] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:20.052 [2024-12-12 05:55:27.564644] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:20.052 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.052 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:20.052 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:20.052 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:20.052 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:17:20.052 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:20.052 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:20.052 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:20.052 05:55:27 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.052 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.312 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.312 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:20.312 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.312 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.312 [ 00:17:20.312 { 00:17:20.312 "name": "BaseBdev2", 00:17:20.312 "aliases": [ 00:17:20.312 "0ed707fa-e36d-4b08-9b61-a2234c9496e8" 00:17:20.312 ], 00:17:20.312 "product_name": "Malloc disk", 00:17:20.312 "block_size": 4096, 00:17:20.312 "num_blocks": 8192, 00:17:20.312 "uuid": "0ed707fa-e36d-4b08-9b61-a2234c9496e8", 00:17:20.312 "assigned_rate_limits": { 00:17:20.312 "rw_ios_per_sec": 0, 00:17:20.312 "rw_mbytes_per_sec": 0, 00:17:20.312 "r_mbytes_per_sec": 0, 00:17:20.312 "w_mbytes_per_sec": 0 00:17:20.312 }, 00:17:20.312 "claimed": true, 00:17:20.312 "claim_type": "exclusive_write", 00:17:20.312 "zoned": false, 00:17:20.312 "supported_io_types": { 00:17:20.312 "read": true, 00:17:20.312 "write": true, 00:17:20.312 "unmap": true, 00:17:20.312 "flush": true, 00:17:20.312 "reset": true, 00:17:20.312 "nvme_admin": false, 00:17:20.312 "nvme_io": false, 00:17:20.312 "nvme_io_md": false, 00:17:20.312 "write_zeroes": true, 00:17:20.312 "zcopy": true, 00:17:20.312 "get_zone_info": false, 00:17:20.312 "zone_management": false, 00:17:20.312 "zone_append": false, 00:17:20.312 "compare": false, 00:17:20.312 "compare_and_write": false, 00:17:20.312 "abort": true, 00:17:20.312 "seek_hole": false, 00:17:20.312 "seek_data": false, 00:17:20.312 "copy": true, 00:17:20.312 "nvme_iov_md": false 
00:17:20.312 }, 00:17:20.312 "memory_domains": [ 00:17:20.312 { 00:17:20.312 "dma_device_id": "system", 00:17:20.312 "dma_device_type": 1 00:17:20.312 }, 00:17:20.312 { 00:17:20.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.312 "dma_device_type": 2 00:17:20.312 } 00:17:20.312 ], 00:17:20.312 "driver_specific": {} 00:17:20.312 } 00:17:20.312 ] 00:17:20.312 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.312 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:17:20.313 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:20.313 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:20.313 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:20.313 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:20.313 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:20.313 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:20.313 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:20.313 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:20.313 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.313 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.313 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.313 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:17:20.313 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:20.313 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.313 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.313 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.313 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.313 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.313 "name": "Existed_Raid", 00:17:20.313 "uuid": "880566c5-19c5-43d2-a4ff-177243097b09", 00:17:20.313 "strip_size_kb": 0, 00:17:20.313 "state": "online", 00:17:20.313 "raid_level": "raid1", 00:17:20.313 "superblock": true, 00:17:20.313 "num_base_bdevs": 2, 00:17:20.313 "num_base_bdevs_discovered": 2, 00:17:20.313 "num_base_bdevs_operational": 2, 00:17:20.313 "base_bdevs_list": [ 00:17:20.313 { 00:17:20.313 "name": "BaseBdev1", 00:17:20.313 "uuid": "64a4b392-30e1-4504-ac77-af1d7b52e8f3", 00:17:20.313 "is_configured": true, 00:17:20.313 "data_offset": 256, 00:17:20.313 "data_size": 7936 00:17:20.313 }, 00:17:20.313 { 00:17:20.313 "name": "BaseBdev2", 00:17:20.313 "uuid": "0ed707fa-e36d-4b08-9b61-a2234c9496e8", 00:17:20.313 "is_configured": true, 00:17:20.313 "data_offset": 256, 00:17:20.313 "data_size": 7936 00:17:20.313 } 00:17:20.313 ] 00:17:20.313 }' 00:17:20.313 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.313 05:55:27 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.573 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:20.573 05:55:28 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:20.573 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:20.573 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:20.573 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:20.573 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:20.573 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:20.573 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:20.573 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.573 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.573 [2024-12-12 05:55:28.075098] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:20.833 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.833 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:20.833 "name": "Existed_Raid", 00:17:20.833 "aliases": [ 00:17:20.833 "880566c5-19c5-43d2-a4ff-177243097b09" 00:17:20.833 ], 00:17:20.833 "product_name": "Raid Volume", 00:17:20.833 "block_size": 4096, 00:17:20.833 "num_blocks": 7936, 00:17:20.833 "uuid": "880566c5-19c5-43d2-a4ff-177243097b09", 00:17:20.833 "assigned_rate_limits": { 00:17:20.833 "rw_ios_per_sec": 0, 00:17:20.833 "rw_mbytes_per_sec": 0, 00:17:20.833 "r_mbytes_per_sec": 0, 00:17:20.833 "w_mbytes_per_sec": 0 00:17:20.833 }, 00:17:20.833 "claimed": false, 00:17:20.833 "zoned": false, 00:17:20.833 "supported_io_types": { 00:17:20.833 "read": true, 
00:17:20.833 "write": true, 00:17:20.833 "unmap": false, 00:17:20.833 "flush": false, 00:17:20.833 "reset": true, 00:17:20.833 "nvme_admin": false, 00:17:20.833 "nvme_io": false, 00:17:20.833 "nvme_io_md": false, 00:17:20.833 "write_zeroes": true, 00:17:20.833 "zcopy": false, 00:17:20.833 "get_zone_info": false, 00:17:20.833 "zone_management": false, 00:17:20.833 "zone_append": false, 00:17:20.833 "compare": false, 00:17:20.833 "compare_and_write": false, 00:17:20.833 "abort": false, 00:17:20.833 "seek_hole": false, 00:17:20.833 "seek_data": false, 00:17:20.833 "copy": false, 00:17:20.833 "nvme_iov_md": false 00:17:20.833 }, 00:17:20.833 "memory_domains": [ 00:17:20.833 { 00:17:20.833 "dma_device_id": "system", 00:17:20.833 "dma_device_type": 1 00:17:20.833 }, 00:17:20.833 { 00:17:20.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.833 "dma_device_type": 2 00:17:20.833 }, 00:17:20.833 { 00:17:20.833 "dma_device_id": "system", 00:17:20.833 "dma_device_type": 1 00:17:20.833 }, 00:17:20.833 { 00:17:20.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.833 "dma_device_type": 2 00:17:20.833 } 00:17:20.833 ], 00:17:20.833 "driver_specific": { 00:17:20.833 "raid": { 00:17:20.833 "uuid": "880566c5-19c5-43d2-a4ff-177243097b09", 00:17:20.833 "strip_size_kb": 0, 00:17:20.833 "state": "online", 00:17:20.833 "raid_level": "raid1", 00:17:20.833 "superblock": true, 00:17:20.833 "num_base_bdevs": 2, 00:17:20.833 "num_base_bdevs_discovered": 2, 00:17:20.833 "num_base_bdevs_operational": 2, 00:17:20.833 "base_bdevs_list": [ 00:17:20.833 { 00:17:20.833 "name": "BaseBdev1", 00:17:20.833 "uuid": "64a4b392-30e1-4504-ac77-af1d7b52e8f3", 00:17:20.833 "is_configured": true, 00:17:20.833 "data_offset": 256, 00:17:20.833 "data_size": 7936 00:17:20.833 }, 00:17:20.833 { 00:17:20.833 "name": "BaseBdev2", 00:17:20.833 "uuid": "0ed707fa-e36d-4b08-9b61-a2234c9496e8", 00:17:20.833 "is_configured": true, 00:17:20.833 "data_offset": 256, 00:17:20.833 "data_size": 7936 00:17:20.833 } 
00:17:20.833 ] 00:17:20.833 } 00:17:20.833 } 00:17:20.833 }' 00:17:20.833 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:20.833 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:20.833 BaseBdev2' 00:17:20.833 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:20.833 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:20.833 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:20.833 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:20.833 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:20.833 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.834 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.834 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.834 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:20.834 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:20.834 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:20.834 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:20.834 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.834 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.834 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:20.834 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.834 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:20.834 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:20.834 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:20.834 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.834 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:20.834 [2024-12-12 05:55:28.322529] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:21.094 05:55:28 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.094 "name": "Existed_Raid", 00:17:21.094 "uuid": "880566c5-19c5-43d2-a4ff-177243097b09", 00:17:21.094 "strip_size_kb": 0, 00:17:21.094 "state": "online", 00:17:21.094 "raid_level": "raid1", 00:17:21.094 "superblock": true, 00:17:21.094 
"num_base_bdevs": 2, 00:17:21.094 "num_base_bdevs_discovered": 1, 00:17:21.094 "num_base_bdevs_operational": 1, 00:17:21.094 "base_bdevs_list": [ 00:17:21.094 { 00:17:21.094 "name": null, 00:17:21.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:21.094 "is_configured": false, 00:17:21.094 "data_offset": 0, 00:17:21.094 "data_size": 7936 00:17:21.094 }, 00:17:21.094 { 00:17:21.094 "name": "BaseBdev2", 00:17:21.094 "uuid": "0ed707fa-e36d-4b08-9b61-a2234c9496e8", 00:17:21.094 "is_configured": true, 00:17:21.094 "data_offset": 256, 00:17:21.094 "data_size": 7936 00:17:21.094 } 00:17:21.094 ] 00:17:21.094 }' 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.094 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.354 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:21.354 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:21.614 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.614 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:21.614 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.614 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.614 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.614 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:21.614 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:21.614 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:17:21.614 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.614 05:55:28 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.614 [2024-12-12 05:55:28.926730] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:21.614 [2024-12-12 05:55:28.926828] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:21.614 [2024-12-12 05:55:29.016902] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.614 [2024-12-12 05:55:29.017041] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.614 [2024-12-12 05:55:29.017058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:21.614 05:55:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.614 05:55:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:21.614 05:55:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:21.614 05:55:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.614 05:55:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:21.614 05:55:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.614 05:55:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.614 05:55:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.614 05:55:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:21.614 05:55:29 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:21.614 05:55:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:21.614 05:55:29 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 85445 00:17:21.614 05:55:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85445 ']' 00:17:21.614 05:55:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85445 00:17:21.614 05:55:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:21.614 05:55:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.614 05:55:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85445 00:17:21.614 05:55:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.614 05:55:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.614 05:55:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85445' 00:17:21.614 killing process with pid 85445 00:17:21.614 05:55:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85445 00:17:21.614 [2024-12-12 05:55:29.110159] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:21.614 05:55:29 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85445 00:17:21.614 [2024-12-12 05:55:29.126510] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:22.996 05:55:30 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:17:22.996 00:17:22.996 real 0m5.022s 00:17:22.996 user 0m7.276s 00:17:22.996 sys 0m0.906s 00:17:22.996 05:55:30 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.996 05:55:30 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.996 ************************************ 00:17:22.996 END TEST raid_state_function_test_sb_4k 00:17:22.996 ************************************ 00:17:22.996 05:55:30 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:22.996 05:55:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:22.996 05:55:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.996 05:55:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:22.996 ************************************ 00:17:22.996 START TEST raid_superblock_test_4k 00:17:22.996 ************************************ 00:17:22.996 05:55:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:22.996 05:55:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:22.996 05:55:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:22.996 05:55:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:22.996 05:55:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:22.996 05:55:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:22.996 05:55:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:22.996 05:55:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:22.996 05:55:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:22.996 05:55:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:22.996 
05:55:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:22.996 05:55:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:22.996 05:55:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:22.997 05:55:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:22.997 05:55:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:22.997 05:55:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:22.997 05:55:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=85663 00:17:22.997 05:55:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:22.997 05:55:30 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 85663 00:17:22.997 05:55:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 85663 ']' 00:17:22.997 05:55:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.997 05:55:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.997 05:55:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:22.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.997 05:55:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.997 05:55:30 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:22.997 [2024-12-12 05:55:30.339451] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:17:22.997 [2024-12-12 05:55:30.339605] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85663 ] 00:17:22.997 [2024-12-12 05:55:30.512933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.256 [2024-12-12 05:55:30.615168] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.516 [2024-12-12 05:55:30.800562] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.516 [2024-12-12 05:55:30.800594] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:23.775 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.775 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:17:23.775 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:23.775 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:23.775 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:23.775 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:23.775 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:23.775 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:23.775 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:23.775 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:23.775 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:17:23.775 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.775 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.775 malloc1 00:17:23.775 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.775 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:23.775 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.775 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.775 [2024-12-12 05:55:31.193718] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:23.775 [2024-12-12 05:55:31.193845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.776 [2024-12-12 05:55:31.193883] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:23.776 [2024-12-12 05:55:31.193911] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.776 [2024-12-12 05:55:31.196088] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.776 [2024-12-12 05:55:31.196187] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:23.776 pt1 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.776 malloc2 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.776 [2024-12-12 05:55:31.245995] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:23.776 [2024-12-12 05:55:31.246048] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.776 [2024-12-12 05:55:31.246086] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:23.776 [2024-12-12 05:55:31.246095] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.776 [2024-12-12 05:55:31.248079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.776 [2024-12-12 
05:55:31.248116] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:23.776 pt2 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.776 [2024-12-12 05:55:31.258019] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:23.776 [2024-12-12 05:55:31.259716] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:23.776 [2024-12-12 05:55:31.259881] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:23.776 [2024-12-12 05:55:31.259898] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:23.776 [2024-12-12 05:55:31.260121] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:23.776 [2024-12-12 05:55:31.260266] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:23.776 [2024-12-12 05:55:31.260280] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:23.776 [2024-12-12 05:55:31.260414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.776 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.036 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.036 "name": "raid_bdev1", 00:17:24.036 "uuid": "5666ae46-82f8-4b05-ac60-d9f5ed8b714d", 00:17:24.036 "strip_size_kb": 0, 00:17:24.036 "state": "online", 00:17:24.036 "raid_level": "raid1", 00:17:24.036 "superblock": true, 00:17:24.036 "num_base_bdevs": 2, 00:17:24.036 
"num_base_bdevs_discovered": 2, 00:17:24.036 "num_base_bdevs_operational": 2, 00:17:24.036 "base_bdevs_list": [ 00:17:24.036 { 00:17:24.036 "name": "pt1", 00:17:24.036 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:24.036 "is_configured": true, 00:17:24.036 "data_offset": 256, 00:17:24.036 "data_size": 7936 00:17:24.036 }, 00:17:24.036 { 00:17:24.036 "name": "pt2", 00:17:24.036 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:24.036 "is_configured": true, 00:17:24.036 "data_offset": 256, 00:17:24.036 "data_size": 7936 00:17:24.036 } 00:17:24.036 ] 00:17:24.036 }' 00:17:24.036 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.036 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.295 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:24.295 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:24.295 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:24.295 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:24.295 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:24.295 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:24.295 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:24.295 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:24.295 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.295 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.295 [2024-12-12 05:55:31.777372] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:24.295 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.555 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:24.555 "name": "raid_bdev1", 00:17:24.555 "aliases": [ 00:17:24.555 "5666ae46-82f8-4b05-ac60-d9f5ed8b714d" 00:17:24.555 ], 00:17:24.555 "product_name": "Raid Volume", 00:17:24.555 "block_size": 4096, 00:17:24.555 "num_blocks": 7936, 00:17:24.555 "uuid": "5666ae46-82f8-4b05-ac60-d9f5ed8b714d", 00:17:24.555 "assigned_rate_limits": { 00:17:24.555 "rw_ios_per_sec": 0, 00:17:24.556 "rw_mbytes_per_sec": 0, 00:17:24.556 "r_mbytes_per_sec": 0, 00:17:24.556 "w_mbytes_per_sec": 0 00:17:24.556 }, 00:17:24.556 "claimed": false, 00:17:24.556 "zoned": false, 00:17:24.556 "supported_io_types": { 00:17:24.556 "read": true, 00:17:24.556 "write": true, 00:17:24.556 "unmap": false, 00:17:24.556 "flush": false, 00:17:24.556 "reset": true, 00:17:24.556 "nvme_admin": false, 00:17:24.556 "nvme_io": false, 00:17:24.556 "nvme_io_md": false, 00:17:24.556 "write_zeroes": true, 00:17:24.556 "zcopy": false, 00:17:24.556 "get_zone_info": false, 00:17:24.556 "zone_management": false, 00:17:24.556 "zone_append": false, 00:17:24.556 "compare": false, 00:17:24.556 "compare_and_write": false, 00:17:24.556 "abort": false, 00:17:24.556 "seek_hole": false, 00:17:24.556 "seek_data": false, 00:17:24.556 "copy": false, 00:17:24.556 "nvme_iov_md": false 00:17:24.556 }, 00:17:24.556 "memory_domains": [ 00:17:24.556 { 00:17:24.556 "dma_device_id": "system", 00:17:24.556 "dma_device_type": 1 00:17:24.556 }, 00:17:24.556 { 00:17:24.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.556 "dma_device_type": 2 00:17:24.556 }, 00:17:24.556 { 00:17:24.556 "dma_device_id": "system", 00:17:24.556 "dma_device_type": 1 00:17:24.556 }, 00:17:24.556 { 00:17:24.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:24.556 "dma_device_type": 2 00:17:24.556 } 00:17:24.556 ], 
00:17:24.556 "driver_specific": { 00:17:24.556 "raid": { 00:17:24.556 "uuid": "5666ae46-82f8-4b05-ac60-d9f5ed8b714d", 00:17:24.556 "strip_size_kb": 0, 00:17:24.556 "state": "online", 00:17:24.556 "raid_level": "raid1", 00:17:24.556 "superblock": true, 00:17:24.556 "num_base_bdevs": 2, 00:17:24.556 "num_base_bdevs_discovered": 2, 00:17:24.556 "num_base_bdevs_operational": 2, 00:17:24.556 "base_bdevs_list": [ 00:17:24.556 { 00:17:24.556 "name": "pt1", 00:17:24.556 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:24.556 "is_configured": true, 00:17:24.556 "data_offset": 256, 00:17:24.556 "data_size": 7936 00:17:24.556 }, 00:17:24.556 { 00:17:24.556 "name": "pt2", 00:17:24.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:24.556 "is_configured": true, 00:17:24.556 "data_offset": 256, 00:17:24.556 "data_size": 7936 00:17:24.556 } 00:17:24.556 ] 00:17:24.556 } 00:17:24.556 } 00:17:24.556 }' 00:17:24.556 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:24.556 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:24.556 pt2' 00:17:24.556 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:24.556 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:24.556 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:24.556 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:24.556 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.556 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.556 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:24.556 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.556 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:24.556 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:24.556 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:24.556 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:24.556 05:55:31 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:24.556 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.556 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.556 05:55:31 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.556 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:24.556 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:24.556 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:24.556 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:24.556 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.556 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.556 [2024-12-12 05:55:32.024953] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:24.556 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:17:24.556 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5666ae46-82f8-4b05-ac60-d9f5ed8b714d 00:17:24.556 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 5666ae46-82f8-4b05-ac60-d9f5ed8b714d ']' 00:17:24.556 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:24.556 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.556 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.556 [2024-12-12 05:55:32.052684] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:24.556 [2024-12-12 05:55:32.052706] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:24.556 [2024-12-12 05:55:32.052772] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:24.556 [2024-12-12 05:55:32.052835] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:24.556 [2024-12-12 05:55:32.052848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:24.556 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.556 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.556 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.556 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:24.556 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.556 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.816 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.816 [2024-12-12 05:55:32.188450] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:24.816 [2024-12-12 05:55:32.190079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:24.816 [2024-12-12 05:55:32.190133] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:24.816 [2024-12-12 05:55:32.190198] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:24.816 [2024-12-12 05:55:32.190212] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:24.817 [2024-12-12 05:55:32.190220] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:24.817 request: 00:17:24.817 { 00:17:24.817 "name": "raid_bdev1", 00:17:24.817 "raid_level": "raid1", 00:17:24.817 "base_bdevs": [ 00:17:24.817 "malloc1", 00:17:24.817 "malloc2" 00:17:24.817 ], 00:17:24.817 "superblock": false, 00:17:24.817 "method": "bdev_raid_create", 00:17:24.817 "req_id": 1 00:17:24.817 } 00:17:24.817 Got JSON-RPC error response 00:17:24.817 response: 00:17:24.817 { 00:17:24.817 "code": -17, 00:17:24.817 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:24.817 } 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.817 [2024-12-12 05:55:32.252329] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:24.817 [2024-12-12 05:55:32.252418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.817 [2024-12-12 05:55:32.252449] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:24.817 [2024-12-12 05:55:32.252476] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.817 [2024-12-12 05:55:32.254539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.817 [2024-12-12 05:55:32.254639] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:24.817 [2024-12-12 05:55:32.254732] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:24.817 [2024-12-12 05:55:32.254824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:24.817 pt1 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.817 "name": "raid_bdev1", 00:17:24.817 "uuid": "5666ae46-82f8-4b05-ac60-d9f5ed8b714d", 00:17:24.817 "strip_size_kb": 0, 00:17:24.817 "state": "configuring", 00:17:24.817 "raid_level": "raid1", 00:17:24.817 "superblock": true, 00:17:24.817 "num_base_bdevs": 2, 00:17:24.817 "num_base_bdevs_discovered": 1, 00:17:24.817 "num_base_bdevs_operational": 2, 00:17:24.817 "base_bdevs_list": [ 00:17:24.817 { 00:17:24.817 "name": "pt1", 00:17:24.817 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:24.817 "is_configured": true, 00:17:24.817 "data_offset": 256, 00:17:24.817 "data_size": 7936 00:17:24.817 }, 00:17:24.817 { 00:17:24.817 "name": null, 00:17:24.817 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:24.817 "is_configured": false, 00:17:24.817 "data_offset": 256, 00:17:24.817 "data_size": 7936 00:17:24.817 } 
00:17:24.817 ] 00:17:24.817 }' 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.817 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.386 [2024-12-12 05:55:32.711565] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:25.386 [2024-12-12 05:55:32.711616] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.386 [2024-12-12 05:55:32.711651] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:25.386 [2024-12-12 05:55:32.711661] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.386 [2024-12-12 05:55:32.712021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.386 [2024-12-12 05:55:32.712053] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:25.386 [2024-12-12 05:55:32.712110] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:25.386 [2024-12-12 05:55:32.712146] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:25.386 [2024-12-12 05:55:32.712261] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:17:25.386 [2024-12-12 05:55:32.712277] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:25.386 [2024-12-12 05:55:32.712517] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:25.386 [2024-12-12 05:55:32.712659] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:25.386 [2024-12-12 05:55:32.712667] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:25.386 [2024-12-12 05:55:32.712792] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.386 pt2 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.386 "name": "raid_bdev1", 00:17:25.386 "uuid": "5666ae46-82f8-4b05-ac60-d9f5ed8b714d", 00:17:25.386 "strip_size_kb": 0, 00:17:25.386 "state": "online", 00:17:25.386 "raid_level": "raid1", 00:17:25.386 "superblock": true, 00:17:25.386 "num_base_bdevs": 2, 00:17:25.386 "num_base_bdevs_discovered": 2, 00:17:25.386 "num_base_bdevs_operational": 2, 00:17:25.386 "base_bdevs_list": [ 00:17:25.386 { 00:17:25.386 "name": "pt1", 00:17:25.386 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:25.386 "is_configured": true, 00:17:25.386 "data_offset": 256, 00:17:25.386 "data_size": 7936 00:17:25.386 }, 00:17:25.386 { 00:17:25.386 "name": "pt2", 00:17:25.386 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:25.386 "is_configured": true, 00:17:25.386 "data_offset": 256, 00:17:25.386 "data_size": 7936 00:17:25.386 } 00:17:25.386 ] 00:17:25.386 }' 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.386 05:55:32 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.646 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:17:25.646 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:25.646 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:25.646 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:25.646 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:17:25.646 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:25.646 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:25.646 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:25.646 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.646 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.646 [2024-12-12 05:55:33.143010] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:25.646 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:25.906 "name": "raid_bdev1", 00:17:25.906 "aliases": [ 00:17:25.906 "5666ae46-82f8-4b05-ac60-d9f5ed8b714d" 00:17:25.906 ], 00:17:25.906 "product_name": "Raid Volume", 00:17:25.906 "block_size": 4096, 00:17:25.906 "num_blocks": 7936, 00:17:25.906 "uuid": "5666ae46-82f8-4b05-ac60-d9f5ed8b714d", 00:17:25.906 "assigned_rate_limits": { 00:17:25.906 "rw_ios_per_sec": 0, 00:17:25.906 "rw_mbytes_per_sec": 0, 00:17:25.906 "r_mbytes_per_sec": 0, 00:17:25.906 "w_mbytes_per_sec": 0 00:17:25.906 }, 00:17:25.906 "claimed": false, 00:17:25.906 "zoned": false, 00:17:25.906 "supported_io_types": { 00:17:25.906 "read": true, 00:17:25.906 "write": true, 00:17:25.906 "unmap": false, 
00:17:25.906 "flush": false, 00:17:25.906 "reset": true, 00:17:25.906 "nvme_admin": false, 00:17:25.906 "nvme_io": false, 00:17:25.906 "nvme_io_md": false, 00:17:25.906 "write_zeroes": true, 00:17:25.906 "zcopy": false, 00:17:25.906 "get_zone_info": false, 00:17:25.906 "zone_management": false, 00:17:25.906 "zone_append": false, 00:17:25.906 "compare": false, 00:17:25.906 "compare_and_write": false, 00:17:25.906 "abort": false, 00:17:25.906 "seek_hole": false, 00:17:25.906 "seek_data": false, 00:17:25.906 "copy": false, 00:17:25.906 "nvme_iov_md": false 00:17:25.906 }, 00:17:25.906 "memory_domains": [ 00:17:25.906 { 00:17:25.906 "dma_device_id": "system", 00:17:25.906 "dma_device_type": 1 00:17:25.906 }, 00:17:25.906 { 00:17:25.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.906 "dma_device_type": 2 00:17:25.906 }, 00:17:25.906 { 00:17:25.906 "dma_device_id": "system", 00:17:25.906 "dma_device_type": 1 00:17:25.906 }, 00:17:25.906 { 00:17:25.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:25.906 "dma_device_type": 2 00:17:25.906 } 00:17:25.906 ], 00:17:25.906 "driver_specific": { 00:17:25.906 "raid": { 00:17:25.906 "uuid": "5666ae46-82f8-4b05-ac60-d9f5ed8b714d", 00:17:25.906 "strip_size_kb": 0, 00:17:25.906 "state": "online", 00:17:25.906 "raid_level": "raid1", 00:17:25.906 "superblock": true, 00:17:25.906 "num_base_bdevs": 2, 00:17:25.906 "num_base_bdevs_discovered": 2, 00:17:25.906 "num_base_bdevs_operational": 2, 00:17:25.906 "base_bdevs_list": [ 00:17:25.906 { 00:17:25.906 "name": "pt1", 00:17:25.906 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:25.906 "is_configured": true, 00:17:25.906 "data_offset": 256, 00:17:25.906 "data_size": 7936 00:17:25.906 }, 00:17:25.906 { 00:17:25.906 "name": "pt2", 00:17:25.906 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:25.906 "is_configured": true, 00:17:25.906 "data_offset": 256, 00:17:25.906 "data_size": 7936 00:17:25.906 } 00:17:25.906 ] 00:17:25.906 } 00:17:25.906 } 00:17:25.906 }' 00:17:25.906 
05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:25.906 pt2' 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:25.906 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.907 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:25.907 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:25.907 [2024-12-12 05:55:33.378707] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:25.907 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 5666ae46-82f8-4b05-ac60-d9f5ed8b714d '!=' 5666ae46-82f8-4b05-ac60-d9f5ed8b714d ']' 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.167 [2024-12-12 05:55:33.434400] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 
00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.167 "name": "raid_bdev1", 00:17:26.167 "uuid": 
"5666ae46-82f8-4b05-ac60-d9f5ed8b714d", 00:17:26.167 "strip_size_kb": 0, 00:17:26.167 "state": "online", 00:17:26.167 "raid_level": "raid1", 00:17:26.167 "superblock": true, 00:17:26.167 "num_base_bdevs": 2, 00:17:26.167 "num_base_bdevs_discovered": 1, 00:17:26.167 "num_base_bdevs_operational": 1, 00:17:26.167 "base_bdevs_list": [ 00:17:26.167 { 00:17:26.167 "name": null, 00:17:26.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.167 "is_configured": false, 00:17:26.167 "data_offset": 0, 00:17:26.167 "data_size": 7936 00:17:26.167 }, 00:17:26.167 { 00:17:26.167 "name": "pt2", 00:17:26.167 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:26.167 "is_configured": true, 00:17:26.167 "data_offset": 256, 00:17:26.167 "data_size": 7936 00:17:26.167 } 00:17:26.167 ] 00:17:26.167 }' 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.167 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.427 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:26.427 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.427 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.427 [2024-12-12 05:55:33.909562] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:26.427 [2024-12-12 05:55:33.909625] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:26.427 [2024-12-12 05:55:33.909693] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.427 [2024-12-12 05:55:33.909744] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:26.427 [2024-12-12 05:55:33.909819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state 
offline 00:17:26.427 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.427 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:26.427 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.427 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.427 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.427 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.687 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 
00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.688 [2024-12-12 05:55:33.977423] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:26.688 [2024-12-12 05:55:33.977472] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:26.688 [2024-12-12 05:55:33.977487] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:26.688 [2024-12-12 05:55:33.977496] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:26.688 [2024-12-12 05:55:33.979580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:26.688 [2024-12-12 05:55:33.979616] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:26.688 [2024-12-12 05:55:33.979680] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:26.688 [2024-12-12 05:55:33.979723] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:26.688 [2024-12-12 05:55:33.979841] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:26.688 [2024-12-12 05:55:33.979856] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:26.688 [2024-12-12 05:55:33.980097] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:26.688 [2024-12-12 05:55:33.980247] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:26.688 [2024-12-12 05:55:33.980256] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000008200 00:17:26.688 [2024-12-12 05:55:33.980384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:26.688 pt2 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.688 05:55:33 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.688 05:55:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.688 05:55:34 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:26.688 "name": "raid_bdev1", 00:17:26.688 "uuid": "5666ae46-82f8-4b05-ac60-d9f5ed8b714d", 00:17:26.688 "strip_size_kb": 0, 00:17:26.688 "state": "online", 00:17:26.688 "raid_level": "raid1", 00:17:26.688 "superblock": true, 00:17:26.688 "num_base_bdevs": 2, 00:17:26.688 "num_base_bdevs_discovered": 1, 00:17:26.688 "num_base_bdevs_operational": 1, 00:17:26.688 "base_bdevs_list": [ 00:17:26.688 { 00:17:26.688 "name": null, 00:17:26.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.688 "is_configured": false, 00:17:26.688 "data_offset": 256, 00:17:26.688 "data_size": 7936 00:17:26.688 }, 00:17:26.688 { 00:17:26.688 "name": "pt2", 00:17:26.688 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:26.688 "is_configured": true, 00:17:26.688 "data_offset": 256, 00:17:26.688 "data_size": 7936 00:17:26.688 } 00:17:26.688 ] 00:17:26.688 }' 00:17:26.688 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:26.688 05:55:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.948 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:26.948 05:55:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.948 05:55:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:26.948 [2024-12-12 05:55:34.452583] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:26.948 [2024-12-12 05:55:34.452648] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:26.948 [2024-12-12 05:55:34.452717] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:26.948 [2024-12-12 05:55:34.452770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:17:26.948 [2024-12-12 05:55:34.452852] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:26.948 05:55:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.948 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.948 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:26.948 05:55:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.948 05:55:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.208 [2024-12-12 05:55:34.516494] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:27.208 [2024-12-12 05:55:34.516612] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:27.208 [2024-12-12 05:55:34.516651] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:27.208 [2024-12-12 05:55:34.516681] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:27.208 [2024-12-12 05:55:34.518763] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:27.208 [2024-12-12 05:55:34.518837] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:27.208 [2024-12-12 05:55:34.518934] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:27.208 [2024-12-12 05:55:34.519024] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:27.208 [2024-12-12 05:55:34.519200] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:27.208 [2024-12-12 05:55:34.519255] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:27.208 [2024-12-12 05:55:34.519297] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:27.208 [2024-12-12 05:55:34.519415] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:27.208 [2024-12-12 05:55:34.519533] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:27.208 [2024-12-12 05:55:34.519574] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:27.208 [2024-12-12 05:55:34.519838] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:27.208 [2024-12-12 05:55:34.520031] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:27.208 [2024-12-12 05:55:34.520079] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:27.208 [2024-12-12 05:55:34.520292] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:27.208 pt1 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- 
# '[' 2 -gt 2 ']' 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:27.208 "name": "raid_bdev1", 00:17:27.208 "uuid": "5666ae46-82f8-4b05-ac60-d9f5ed8b714d", 00:17:27.208 "strip_size_kb": 0, 00:17:27.208 "state": "online", 00:17:27.208 
"raid_level": "raid1", 00:17:27.208 "superblock": true, 00:17:27.208 "num_base_bdevs": 2, 00:17:27.208 "num_base_bdevs_discovered": 1, 00:17:27.208 "num_base_bdevs_operational": 1, 00:17:27.208 "base_bdevs_list": [ 00:17:27.208 { 00:17:27.208 "name": null, 00:17:27.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.208 "is_configured": false, 00:17:27.208 "data_offset": 256, 00:17:27.208 "data_size": 7936 00:17:27.208 }, 00:17:27.208 { 00:17:27.208 "name": "pt2", 00:17:27.208 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:27.208 "is_configured": true, 00:17:27.208 "data_offset": 256, 00:17:27.208 "data_size": 7936 00:17:27.208 } 00:17:27.208 ] 00:17:27.208 }' 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:27.208 05:55:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.468 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:27.468 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:27.468 05:55:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.468 05:55:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:27.468 05:55:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.728 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:27.728 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:27.729 05:55:34 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:27.729 05:55:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.729 05:55:34 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # 
set +x 00:17:27.729 [2024-12-12 05:55:35.003818] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:27.729 05:55:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.729 05:55:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 5666ae46-82f8-4b05-ac60-d9f5ed8b714d '!=' 5666ae46-82f8-4b05-ac60-d9f5ed8b714d ']' 00:17:27.729 05:55:35 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 85663 00:17:27.729 05:55:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 85663 ']' 00:17:27.729 05:55:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 85663 00:17:27.729 05:55:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:17:27.729 05:55:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.729 05:55:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85663 00:17:27.729 05:55:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:27.729 05:55:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:27.729 05:55:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85663' 00:17:27.729 killing process with pid 85663 00:17:27.729 05:55:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 85663 00:17:27.729 [2024-12-12 05:55:35.073560] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:27.729 [2024-12-12 05:55:35.073618] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:27.729 [2024-12-12 05:55:35.073652] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:27.729 [2024-12-12 
05:55:35.073664] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:27.729 05:55:35 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 85663 00:17:27.988 [2024-12-12 05:55:35.268617] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:28.927 ************************************ 00:17:28.927 END TEST raid_superblock_test_4k 00:17:28.927 ************************************ 00:17:28.927 05:55:36 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:17:28.927 00:17:28.927 real 0m6.073s 00:17:28.927 user 0m9.215s 00:17:28.927 sys 0m1.122s 00:17:28.927 05:55:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.927 05:55:36 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:28.927 05:55:36 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:17:28.927 05:55:36 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:17:28.927 05:55:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:28.927 05:55:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:28.927 05:55:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:28.927 ************************************ 00:17:28.927 START TEST raid_rebuild_test_sb_4k 00:17:28.927 ************************************ 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:28.927 05:55:36 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:28.927 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 
00:17:28.928 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:28.928 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:28.928 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=85954 00:17:28.928 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:28.928 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 85954 00:17:28.928 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 85954 ']' 00:17:28.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.928 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.928 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:28.928 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.928 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:28.928 05:55:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:29.188 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:29.188 Zero copy mechanism will not be used. 00:17:29.188 [2024-12-12 05:55:36.492832] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:17:29.188 [2024-12-12 05:55:36.492940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85954 ] 00:17:29.188 [2024-12-12 05:55:36.664077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.447 [2024-12-12 05:55:36.768379] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.447 [2024-12-12 05:55:36.960020] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:29.447 [2024-12-12 05:55:36.960071] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:30.017 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.017 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:17:30.017 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:30.017 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:17:30.017 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.017 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.017 BaseBdev1_malloc 00:17:30.017 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.017 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:30.017 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.017 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.017 [2024-12-12 05:55:37.353075] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:30.017 [2024-12-12 05:55:37.353150] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.018 [2024-12-12 05:55:37.353172] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:30.018 [2024-12-12 05:55:37.353183] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.018 [2024-12-12 05:55:37.355307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.018 [2024-12-12 05:55:37.355350] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:30.018 BaseBdev1 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.018 BaseBdev2_malloc 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.018 [2024-12-12 05:55:37.408219] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:30.018 [2024-12-12 05:55:37.408280] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:17:30.018 [2024-12-12 05:55:37.408297] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:30.018 [2024-12-12 05:55:37.408308] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.018 [2024-12-12 05:55:37.410288] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.018 [2024-12-12 05:55:37.410328] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:30.018 BaseBdev2 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.018 spare_malloc 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.018 spare_delay 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.018 
[2024-12-12 05:55:37.492963] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:30.018 [2024-12-12 05:55:37.493026] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:30.018 [2024-12-12 05:55:37.493044] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:30.018 [2024-12-12 05:55:37.493055] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:30.018 [2024-12-12 05:55:37.495215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:30.018 [2024-12-12 05:55:37.495295] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:30.018 spare 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.018 [2024-12-12 05:55:37.504993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:30.018 [2024-12-12 05:55:37.506763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:30.018 [2024-12-12 05:55:37.506931] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:30.018 [2024-12-12 05:55:37.506946] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:30.018 [2024-12-12 05:55:37.507172] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:30.018 [2024-12-12 05:55:37.507325] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:30.018 [2024-12-12 
05:55:37.507334] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:30.018 [2024-12-12 05:55:37.507476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.018 05:55:37 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.278 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.278 "name": "raid_bdev1", 00:17:30.278 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:30.278 "strip_size_kb": 0, 00:17:30.278 "state": "online", 00:17:30.278 "raid_level": "raid1", 00:17:30.278 "superblock": true, 00:17:30.278 "num_base_bdevs": 2, 00:17:30.278 "num_base_bdevs_discovered": 2, 00:17:30.278 "num_base_bdevs_operational": 2, 00:17:30.278 "base_bdevs_list": [ 00:17:30.278 { 00:17:30.278 "name": "BaseBdev1", 00:17:30.278 "uuid": "1a985c36-896e-5ddb-987a-1d08cd3819f3", 00:17:30.278 "is_configured": true, 00:17:30.278 "data_offset": 256, 00:17:30.278 "data_size": 7936 00:17:30.278 }, 00:17:30.278 { 00:17:30.278 "name": "BaseBdev2", 00:17:30.278 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:30.278 "is_configured": true, 00:17:30.278 "data_offset": 256, 00:17:30.278 "data_size": 7936 00:17:30.278 } 00:17:30.278 ] 00:17:30.278 }' 00:17:30.278 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.278 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.538 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:30.538 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.538 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.538 05:55:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:30.538 [2024-12-12 05:55:37.992410] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:30.538 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.538 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:17:30.538 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.538 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.538 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:30.538 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:30.538 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:30.798 
05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:30.798 [2024-12-12 05:55:38.263776] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:30.798 /dev/nbd0 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:30.798 1+0 records in 00:17:30.798 1+0 records out 00:17:30.798 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000423216 s, 9.7 MB/s 00:17:30.798 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.068 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:31.068 05:55:38 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:31.068 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:31.068 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:31.068 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:31.068 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:31.068 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:17:31.068 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:17:31.068 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:17:31.654 7936+0 records in 00:17:31.654 7936+0 records out 00:17:31.654 32505856 bytes (33 MB, 31 MiB) copied, 0.587375 s, 55.3 MB/s 00:17:31.654 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:31.654 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:31.654 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:31.654 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:31.654 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:31.654 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:31.654 05:55:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:31.654 
[2024-12-12 05:55:39.127789] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.654 [2024-12-12 05:55:39.143871] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:31.654 05:55:39 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:31.654 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.914 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:31.914 "name": "raid_bdev1", 00:17:31.914 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:31.914 "strip_size_kb": 0, 00:17:31.914 "state": "online", 00:17:31.914 "raid_level": "raid1", 00:17:31.914 "superblock": true, 00:17:31.914 "num_base_bdevs": 2, 00:17:31.914 "num_base_bdevs_discovered": 1, 00:17:31.914 "num_base_bdevs_operational": 1, 00:17:31.914 "base_bdevs_list": [ 00:17:31.914 { 00:17:31.914 "name": null, 00:17:31.914 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:31.914 "is_configured": false, 00:17:31.914 "data_offset": 0, 00:17:31.914 "data_size": 7936 00:17:31.914 }, 00:17:31.914 { 00:17:31.914 "name": "BaseBdev2", 00:17:31.914 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:31.914 "is_configured": true, 00:17:31.914 "data_offset": 256, 00:17:31.914 
"data_size": 7936 00:17:31.914 } 00:17:31.914 ] 00:17:31.914 }' 00:17:31.914 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:31.914 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.174 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:32.174 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.174 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:32.174 [2024-12-12 05:55:39.631041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:32.174 [2024-12-12 05:55:39.646109] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:17:32.174 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.174 05:55:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:32.174 [2024-12-12 05:55:39.647930] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.556 "name": "raid_bdev1", 00:17:33.556 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:33.556 "strip_size_kb": 0, 00:17:33.556 "state": "online", 00:17:33.556 "raid_level": "raid1", 00:17:33.556 "superblock": true, 00:17:33.556 "num_base_bdevs": 2, 00:17:33.556 "num_base_bdevs_discovered": 2, 00:17:33.556 "num_base_bdevs_operational": 2, 00:17:33.556 "process": { 00:17:33.556 "type": "rebuild", 00:17:33.556 "target": "spare", 00:17:33.556 "progress": { 00:17:33.556 "blocks": 2560, 00:17:33.556 "percent": 32 00:17:33.556 } 00:17:33.556 }, 00:17:33.556 "base_bdevs_list": [ 00:17:33.556 { 00:17:33.556 "name": "spare", 00:17:33.556 "uuid": "e201b7b1-8395-5339-b6b4-d6ab7b9d6691", 00:17:33.556 "is_configured": true, 00:17:33.556 "data_offset": 256, 00:17:33.556 "data_size": 7936 00:17:33.556 }, 00:17:33.556 { 00:17:33.556 "name": "BaseBdev2", 00:17:33.556 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:33.556 "is_configured": true, 00:17:33.556 "data_offset": 256, 00:17:33.556 "data_size": 7936 00:17:33.556 } 00:17:33.556 ] 00:17:33.556 }' 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.556 [2024-12-12 05:55:40.807624] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:33.556 [2024-12-12 05:55:40.852596] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:33.556 [2024-12-12 05:55:40.852656] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:33.556 [2024-12-12 05:55:40.852669] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:33.556 [2024-12-12 05:55:40.852678] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:33.556 "name": "raid_bdev1", 00:17:33.556 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:33.556 "strip_size_kb": 0, 00:17:33.556 "state": "online", 00:17:33.556 "raid_level": "raid1", 00:17:33.556 "superblock": true, 00:17:33.556 "num_base_bdevs": 2, 00:17:33.556 "num_base_bdevs_discovered": 1, 00:17:33.556 "num_base_bdevs_operational": 1, 00:17:33.556 "base_bdevs_list": [ 00:17:33.556 { 00:17:33.556 "name": null, 00:17:33.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:33.556 "is_configured": false, 00:17:33.556 "data_offset": 0, 00:17:33.556 "data_size": 7936 00:17:33.556 }, 00:17:33.556 { 00:17:33.556 "name": "BaseBdev2", 00:17:33.556 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:33.556 "is_configured": true, 00:17:33.556 "data_offset": 256, 00:17:33.556 "data_size": 7936 00:17:33.556 } 00:17:33.556 ] 00:17:33.556 }' 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:33.556 05:55:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:33.816 05:55:41 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:33.816 05:55:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.816 05:55:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:33.816 05:55:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:33.816 05:55:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.077 05:55:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.077 05:55:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.077 05:55:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.077 05:55:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.077 05:55:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.077 05:55:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.077 "name": "raid_bdev1", 00:17:34.077 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:34.077 "strip_size_kb": 0, 00:17:34.077 "state": "online", 00:17:34.077 "raid_level": "raid1", 00:17:34.077 "superblock": true, 00:17:34.077 "num_base_bdevs": 2, 00:17:34.077 "num_base_bdevs_discovered": 1, 00:17:34.077 "num_base_bdevs_operational": 1, 00:17:34.077 "base_bdevs_list": [ 00:17:34.077 { 00:17:34.077 "name": null, 00:17:34.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.077 "is_configured": false, 00:17:34.077 "data_offset": 0, 00:17:34.077 "data_size": 7936 00:17:34.077 }, 00:17:34.077 { 00:17:34.077 "name": "BaseBdev2", 00:17:34.077 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:34.077 "is_configured": true, 00:17:34.077 "data_offset": 
256, 00:17:34.077 "data_size": 7936 00:17:34.077 } 00:17:34.077 ] 00:17:34.077 }' 00:17:34.077 05:55:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.077 05:55:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:34.077 05:55:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.077 05:55:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:34.077 05:55:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:34.077 05:55:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.077 05:55:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:34.077 [2024-12-12 05:55:41.473893] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:34.077 [2024-12-12 05:55:41.488851] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:17:34.077 05:55:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.077 05:55:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:34.077 [2024-12-12 05:55:41.490691] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:35.017 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.017 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.017 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.017 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.017 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.017 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.017 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.017 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.017 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.017 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.278 "name": "raid_bdev1", 00:17:35.278 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:35.278 "strip_size_kb": 0, 00:17:35.278 "state": "online", 00:17:35.278 "raid_level": "raid1", 00:17:35.278 "superblock": true, 00:17:35.278 "num_base_bdevs": 2, 00:17:35.278 "num_base_bdevs_discovered": 2, 00:17:35.278 "num_base_bdevs_operational": 2, 00:17:35.278 "process": { 00:17:35.278 "type": "rebuild", 00:17:35.278 "target": "spare", 00:17:35.278 "progress": { 00:17:35.278 "blocks": 2560, 00:17:35.278 "percent": 32 00:17:35.278 } 00:17:35.278 }, 00:17:35.278 "base_bdevs_list": [ 00:17:35.278 { 00:17:35.278 "name": "spare", 00:17:35.278 "uuid": "e201b7b1-8395-5339-b6b4-d6ab7b9d6691", 00:17:35.278 "is_configured": true, 00:17:35.278 "data_offset": 256, 00:17:35.278 "data_size": 7936 00:17:35.278 }, 00:17:35.278 { 00:17:35.278 "name": "BaseBdev2", 00:17:35.278 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:35.278 "is_configured": true, 00:17:35.278 "data_offset": 256, 00:17:35.278 "data_size": 7936 00:17:35.278 } 00:17:35.278 ] 00:17:35.278 }' 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:35.278 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=656 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.278 05:55:42 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.278 "name": "raid_bdev1", 00:17:35.278 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:35.278 "strip_size_kb": 0, 00:17:35.278 "state": "online", 00:17:35.278 "raid_level": "raid1", 00:17:35.278 "superblock": true, 00:17:35.278 "num_base_bdevs": 2, 00:17:35.278 "num_base_bdevs_discovered": 2, 00:17:35.278 "num_base_bdevs_operational": 2, 00:17:35.278 "process": { 00:17:35.278 "type": "rebuild", 00:17:35.278 "target": "spare", 00:17:35.278 "progress": { 00:17:35.278 "blocks": 2816, 00:17:35.278 "percent": 35 00:17:35.278 } 00:17:35.278 }, 00:17:35.278 "base_bdevs_list": [ 00:17:35.278 { 00:17:35.278 "name": "spare", 00:17:35.278 "uuid": "e201b7b1-8395-5339-b6b4-d6ab7b9d6691", 00:17:35.278 "is_configured": true, 00:17:35.278 "data_offset": 256, 00:17:35.278 "data_size": 7936 00:17:35.278 }, 00:17:35.278 { 00:17:35.278 "name": "BaseBdev2", 00:17:35.278 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:35.278 "is_configured": true, 00:17:35.278 "data_offset": 256, 00:17:35.278 "data_size": 7936 00:17:35.278 } 00:17:35.278 ] 00:17:35.278 }' 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.278 05:55:42 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:36.660 05:55:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:36.660 05:55:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.660 05:55:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.660 05:55:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.660 05:55:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.660 05:55:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.660 05:55:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.660 05:55:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.660 05:55:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.660 05:55:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:36.660 05:55:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.660 05:55:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.660 "name": "raid_bdev1", 00:17:36.660 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:36.660 "strip_size_kb": 0, 00:17:36.660 "state": "online", 00:17:36.660 "raid_level": "raid1", 00:17:36.660 "superblock": true, 00:17:36.660 "num_base_bdevs": 2, 00:17:36.660 "num_base_bdevs_discovered": 2, 00:17:36.660 "num_base_bdevs_operational": 2, 00:17:36.660 "process": { 00:17:36.660 "type": "rebuild", 00:17:36.660 "target": "spare", 00:17:36.660 "progress": { 00:17:36.660 "blocks": 5888, 00:17:36.660 "percent": 74 00:17:36.660 } 00:17:36.660 }, 00:17:36.660 "base_bdevs_list": [ 00:17:36.660 { 
00:17:36.660 "name": "spare", 00:17:36.660 "uuid": "e201b7b1-8395-5339-b6b4-d6ab7b9d6691", 00:17:36.660 "is_configured": true, 00:17:36.660 "data_offset": 256, 00:17:36.660 "data_size": 7936 00:17:36.660 }, 00:17:36.660 { 00:17:36.660 "name": "BaseBdev2", 00:17:36.660 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:36.660 "is_configured": true, 00:17:36.660 "data_offset": 256, 00:17:36.660 "data_size": 7936 00:17:36.660 } 00:17:36.660 ] 00:17:36.660 }' 00:17:36.660 05:55:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.660 05:55:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.660 05:55:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.660 05:55:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.660 05:55:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:37.230 [2024-12-12 05:55:44.602424] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:37.230 [2024-12-12 05:55:44.602491] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:37.230 [2024-12-12 05:55:44.602609] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:37.585 05:55:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:37.585 05:55:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.585 05:55:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.585 05:55:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.585 05:55:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:17:37.585 05:55:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.585 05:55:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.585 05:55:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.585 05:55:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.585 05:55:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.585 05:55:44 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.585 05:55:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.585 "name": "raid_bdev1", 00:17:37.585 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:37.585 "strip_size_kb": 0, 00:17:37.585 "state": "online", 00:17:37.585 "raid_level": "raid1", 00:17:37.585 "superblock": true, 00:17:37.585 "num_base_bdevs": 2, 00:17:37.585 "num_base_bdevs_discovered": 2, 00:17:37.585 "num_base_bdevs_operational": 2, 00:17:37.585 "base_bdevs_list": [ 00:17:37.585 { 00:17:37.585 "name": "spare", 00:17:37.585 "uuid": "e201b7b1-8395-5339-b6b4-d6ab7b9d6691", 00:17:37.585 "is_configured": true, 00:17:37.585 "data_offset": 256, 00:17:37.585 "data_size": 7936 00:17:37.585 }, 00:17:37.585 { 00:17:37.585 "name": "BaseBdev2", 00:17:37.585 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:37.585 "is_configured": true, 00:17:37.585 "data_offset": 256, 00:17:37.585 "data_size": 7936 00:17:37.585 } 00:17:37.585 ] 00:17:37.585 }' 00:17:37.585 05:55:44 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.585 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:37.585 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 
00:17:37.585 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:37.585 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:17:37.585 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:37.585 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.585 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:37.585 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:37.585 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.585 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.585 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.585 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.585 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.844 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.844 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.844 "name": "raid_bdev1", 00:17:37.844 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:37.844 "strip_size_kb": 0, 00:17:37.844 "state": "online", 00:17:37.844 "raid_level": "raid1", 00:17:37.844 "superblock": true, 00:17:37.844 "num_base_bdevs": 2, 00:17:37.844 "num_base_bdevs_discovered": 2, 00:17:37.844 "num_base_bdevs_operational": 2, 00:17:37.844 "base_bdevs_list": [ 00:17:37.844 { 00:17:37.844 "name": "spare", 00:17:37.844 "uuid": "e201b7b1-8395-5339-b6b4-d6ab7b9d6691", 00:17:37.844 "is_configured": true, 00:17:37.844 
"data_offset": 256, 00:17:37.844 "data_size": 7936 00:17:37.844 }, 00:17:37.844 { 00:17:37.844 "name": "BaseBdev2", 00:17:37.844 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:37.844 "is_configured": true, 00:17:37.844 "data_offset": 256, 00:17:37.844 "data_size": 7936 00:17:37.844 } 00:17:37.844 ] 00:17:37.844 }' 00:17:37.844 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.844 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:37.844 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.844 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:37.845 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:37.845 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.845 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.845 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:37.845 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:37.845 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:37.845 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.845 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.845 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.845 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.845 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:37.845 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.845 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.845 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:37.845 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.845 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.845 "name": "raid_bdev1", 00:17:37.845 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:37.845 "strip_size_kb": 0, 00:17:37.845 "state": "online", 00:17:37.845 "raid_level": "raid1", 00:17:37.845 "superblock": true, 00:17:37.845 "num_base_bdevs": 2, 00:17:37.845 "num_base_bdevs_discovered": 2, 00:17:37.845 "num_base_bdevs_operational": 2, 00:17:37.845 "base_bdevs_list": [ 00:17:37.845 { 00:17:37.845 "name": "spare", 00:17:37.845 "uuid": "e201b7b1-8395-5339-b6b4-d6ab7b9d6691", 00:17:37.845 "is_configured": true, 00:17:37.845 "data_offset": 256, 00:17:37.845 "data_size": 7936 00:17:37.845 }, 00:17:37.845 { 00:17:37.845 "name": "BaseBdev2", 00:17:37.845 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:37.845 "is_configured": true, 00:17:37.845 "data_offset": 256, 00:17:37.845 "data_size": 7936 00:17:37.845 } 00:17:37.845 ] 00:17:37.845 }' 00:17:37.845 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.845 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.414 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:38.414 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.414 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.414 
[2024-12-12 05:55:45.677834] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:38.414 [2024-12-12 05:55:45.677864] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.414 [2024-12-12 05:55:45.677932] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.414 [2024-12-12 05:55:45.677992] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:38.414 [2024-12-12 05:55:45.678003] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:38.414 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.414 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.414 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:17:38.414 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.414 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:38.414 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.415 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:38.415 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:38.415 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:38.415 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:38.415 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:38.415 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # 
bdev_list=('BaseBdev1' 'spare') 00:17:38.415 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:38.415 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:38.415 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:38.415 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:17:38.415 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:38.415 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:38.415 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:38.675 /dev/nbd0 00:17:38.675 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:38.675 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:38.675 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:38.675 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:38.675 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:38.675 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:38.675 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:38.675 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:38.675 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:38.675 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:38.675 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:38.675 1+0 records in 00:17:38.675 1+0 records out 00:17:38.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417426 s, 9.8 MB/s 00:17:38.675 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:38.675 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:38.675 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:38.675 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:38.675 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:38.675 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:38.675 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:38.675 05:55:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:38.675 /dev/nbd1 00:17:38.935 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:38.935 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:38.935 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:38.935 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:17:38.935 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:38.935 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:38.935 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:38.935 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:17:38.935 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:38.935 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:38.935 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:38.935 1+0 records in 00:17:38.935 1+0 records out 00:17:38.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000446973 s, 9.2 MB/s 00:17:38.935 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:38.936 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:17:38.936 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:38.936 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:38.936 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:17:38.936 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:38.936 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:38.936 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:38.936 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:38.936 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:38.936 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:38.936 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:38.936 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:17:38.936 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:38.936 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:39.196 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:39.196 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:39.196 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:39.196 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:39.196 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:39.196 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:39.196 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:39.196 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:39.196 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:39.196 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:39.456 05:55:46 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.456 [2024-12-12 05:55:46.859704] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:39.456 [2024-12-12 05:55:46.859820] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:39.456 [2024-12-12 05:55:46.859862] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:39.456 [2024-12-12 05:55:46.859871] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:39.456 [2024-12-12 05:55:46.861994] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:39.456 
[2024-12-12 05:55:46.862082] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:39.456 [2024-12-12 05:55:46.862190] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:39.456 [2024-12-12 05:55:46.862259] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:39.456 [2024-12-12 05:55:46.862489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:39.456 spare 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.456 [2024-12-12 05:55:46.962470] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:39.456 [2024-12-12 05:55:46.962510] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:39.456 [2024-12-12 05:55:46.962778] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:17:39.456 [2024-12-12 05:55:46.962943] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:39.456 [2024-12-12 05:55:46.962979] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:39.456 [2024-12-12 05:55:46.963144] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:39.456 05:55:46 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.456 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.716 05:55:46 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.716 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:39.716 "name": "raid_bdev1", 00:17:39.716 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:39.716 "strip_size_kb": 0, 00:17:39.716 "state": "online", 00:17:39.716 "raid_level": "raid1", 00:17:39.716 "superblock": true, 00:17:39.716 "num_base_bdevs": 2, 00:17:39.716 "num_base_bdevs_discovered": 2, 00:17:39.716 "num_base_bdevs_operational": 2, 
00:17:39.716 "base_bdevs_list": [ 00:17:39.716 { 00:17:39.716 "name": "spare", 00:17:39.716 "uuid": "e201b7b1-8395-5339-b6b4-d6ab7b9d6691", 00:17:39.716 "is_configured": true, 00:17:39.716 "data_offset": 256, 00:17:39.716 "data_size": 7936 00:17:39.716 }, 00:17:39.716 { 00:17:39.716 "name": "BaseBdev2", 00:17:39.716 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:39.716 "is_configured": true, 00:17:39.716 "data_offset": 256, 00:17:39.716 "data_size": 7936 00:17:39.716 } 00:17:39.716 ] 00:17:39.716 }' 00:17:39.716 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:39.716 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.976 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:39.976 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.976 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:39.976 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:39.976 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.976 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.976 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.976 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:39.976 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.976 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.236 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:40.236 "name": "raid_bdev1", 00:17:40.236 
"uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:40.236 "strip_size_kb": 0, 00:17:40.236 "state": "online", 00:17:40.236 "raid_level": "raid1", 00:17:40.236 "superblock": true, 00:17:40.236 "num_base_bdevs": 2, 00:17:40.236 "num_base_bdevs_discovered": 2, 00:17:40.236 "num_base_bdevs_operational": 2, 00:17:40.236 "base_bdevs_list": [ 00:17:40.236 { 00:17:40.236 "name": "spare", 00:17:40.236 "uuid": "e201b7b1-8395-5339-b6b4-d6ab7b9d6691", 00:17:40.236 "is_configured": true, 00:17:40.236 "data_offset": 256, 00:17:40.236 "data_size": 7936 00:17:40.236 }, 00:17:40.236 { 00:17:40.236 "name": "BaseBdev2", 00:17:40.236 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:40.236 "is_configured": true, 00:17:40.236 "data_offset": 256, 00:17:40.236 "data_size": 7936 00:17:40.236 } 00:17:40.236 ] 00:17:40.236 }' 00:17:40.236 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:40.236 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:40.236 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.236 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:40.236 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.236 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.236 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.236 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:40.236 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.236 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:40.236 05:55:47 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:40.236 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.236 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.236 [2024-12-12 05:55:47.634618] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:40.236 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.237 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:40.237 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.237 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.237 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:40.237 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:40.237 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:40.237 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.237 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.237 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.237 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.237 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.237 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.237 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.237 05:55:47 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.237 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.237 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.237 "name": "raid_bdev1", 00:17:40.237 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:40.237 "strip_size_kb": 0, 00:17:40.237 "state": "online", 00:17:40.237 "raid_level": "raid1", 00:17:40.237 "superblock": true, 00:17:40.237 "num_base_bdevs": 2, 00:17:40.237 "num_base_bdevs_discovered": 1, 00:17:40.237 "num_base_bdevs_operational": 1, 00:17:40.237 "base_bdevs_list": [ 00:17:40.237 { 00:17:40.237 "name": null, 00:17:40.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:40.237 "is_configured": false, 00:17:40.237 "data_offset": 0, 00:17:40.237 "data_size": 7936 00:17:40.237 }, 00:17:40.237 { 00:17:40.237 "name": "BaseBdev2", 00:17:40.237 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:40.237 "is_configured": true, 00:17:40.237 "data_offset": 256, 00:17:40.237 "data_size": 7936 00:17:40.237 } 00:17:40.237 ] 00:17:40.237 }' 00:17:40.237 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.237 05:55:47 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.807 05:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:40.807 05:55:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.807 05:55:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:40.807 [2024-12-12 05:55:48.117812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:40.807 [2024-12-12 05:55:48.118073] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than 
existing raid bdev raid_bdev1 (5) 00:17:40.807 [2024-12-12 05:55:48.118135] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:40.807 [2024-12-12 05:55:48.118201] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:40.807 [2024-12-12 05:55:48.133779] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:17:40.807 05:55:48 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.807 05:55:48 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:40.807 [2024-12-12 05:55:48.135586] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:41.745 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:41.745 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:41.745 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:41.745 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:41.745 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:41.745 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.745 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.745 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.745 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:41.745 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.745 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:17:41.745 "name": "raid_bdev1", 00:17:41.745 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:41.746 "strip_size_kb": 0, 00:17:41.746 "state": "online", 00:17:41.746 "raid_level": "raid1", 00:17:41.746 "superblock": true, 00:17:41.746 "num_base_bdevs": 2, 00:17:41.746 "num_base_bdevs_discovered": 2, 00:17:41.746 "num_base_bdevs_operational": 2, 00:17:41.746 "process": { 00:17:41.746 "type": "rebuild", 00:17:41.746 "target": "spare", 00:17:41.746 "progress": { 00:17:41.746 "blocks": 2560, 00:17:41.746 "percent": 32 00:17:41.746 } 00:17:41.746 }, 00:17:41.746 "base_bdevs_list": [ 00:17:41.746 { 00:17:41.746 "name": "spare", 00:17:41.746 "uuid": "e201b7b1-8395-5339-b6b4-d6ab7b9d6691", 00:17:41.746 "is_configured": true, 00:17:41.746 "data_offset": 256, 00:17:41.746 "data_size": 7936 00:17:41.746 }, 00:17:41.746 { 00:17:41.746 "name": "BaseBdev2", 00:17:41.746 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:41.746 "is_configured": true, 00:17:41.746 "data_offset": 256, 00:17:41.746 "data_size": 7936 00:17:41.746 } 00:17:41.746 ] 00:17:41.746 }' 00:17:41.746 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:41.746 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:41.746 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.006 [2024-12-12 05:55:49.298869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:17:42.006 [2024-12-12 05:55:49.340299] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:42.006 [2024-12-12 05:55:49.340417] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:42.006 [2024-12-12 05:55:49.340433] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:42.006 [2024-12-12 05:55:49.340442] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:42.006 "name": "raid_bdev1", 00:17:42.006 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:42.006 "strip_size_kb": 0, 00:17:42.006 "state": "online", 00:17:42.006 "raid_level": "raid1", 00:17:42.006 "superblock": true, 00:17:42.006 "num_base_bdevs": 2, 00:17:42.006 "num_base_bdevs_discovered": 1, 00:17:42.006 "num_base_bdevs_operational": 1, 00:17:42.006 "base_bdevs_list": [ 00:17:42.006 { 00:17:42.006 "name": null, 00:17:42.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:42.006 "is_configured": false, 00:17:42.006 "data_offset": 0, 00:17:42.006 "data_size": 7936 00:17:42.006 }, 00:17:42.006 { 00:17:42.006 "name": "BaseBdev2", 00:17:42.006 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:42.006 "is_configured": true, 00:17:42.006 "data_offset": 256, 00:17:42.006 "data_size": 7936 00:17:42.006 } 00:17:42.006 ] 00:17:42.006 }' 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:42.006 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.266 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:42.266 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.266 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:42.266 [2024-12-12 05:55:49.785311] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:42.266 [2024-12-12 
05:55:49.785373] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:42.266 [2024-12-12 05:55:49.785392] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:42.266 [2024-12-12 05:55:49.785403] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:42.266 [2024-12-12 05:55:49.785889] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:42.266 [2024-12-12 05:55:49.785925] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:42.266 [2024-12-12 05:55:49.786009] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:42.266 [2024-12-12 05:55:49.786024] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:42.266 [2024-12-12 05:55:49.786034] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:17:42.266 [2024-12-12 05:55:49.786064] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:42.526 [2024-12-12 05:55:49.800887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:17:42.526 spare 00:17:42.526 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.526 [2024-12-12 05:55:49.802642] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:42.526 05:55:49 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:17:43.466 05:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:43.466 05:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.466 05:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:43.466 05:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:43.466 05:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.466 05:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.466 05:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.466 05:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.466 05:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.466 05:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.466 05:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:43.466 "name": "raid_bdev1", 00:17:43.466 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:43.466 "strip_size_kb": 0, 00:17:43.466 
"state": "online", 00:17:43.466 "raid_level": "raid1", 00:17:43.466 "superblock": true, 00:17:43.466 "num_base_bdevs": 2, 00:17:43.466 "num_base_bdevs_discovered": 2, 00:17:43.466 "num_base_bdevs_operational": 2, 00:17:43.466 "process": { 00:17:43.466 "type": "rebuild", 00:17:43.466 "target": "spare", 00:17:43.466 "progress": { 00:17:43.466 "blocks": 2560, 00:17:43.466 "percent": 32 00:17:43.466 } 00:17:43.466 }, 00:17:43.466 "base_bdevs_list": [ 00:17:43.466 { 00:17:43.466 "name": "spare", 00:17:43.466 "uuid": "e201b7b1-8395-5339-b6b4-d6ab7b9d6691", 00:17:43.466 "is_configured": true, 00:17:43.466 "data_offset": 256, 00:17:43.466 "data_size": 7936 00:17:43.466 }, 00:17:43.466 { 00:17:43.466 "name": "BaseBdev2", 00:17:43.466 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:43.466 "is_configured": true, 00:17:43.466 "data_offset": 256, 00:17:43.466 "data_size": 7936 00:17:43.466 } 00:17:43.466 ] 00:17:43.466 }' 00:17:43.466 05:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:43.466 05:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:43.466 05:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:43.466 05:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:43.466 05:55:50 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:17:43.466 05:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.466 05:55:50 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.466 [2024-12-12 05:55:50.966945] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:43.726 [2024-12-12 05:55:51.007308] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:17:43.726 [2024-12-12 05:55:51.007412] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.726 [2024-12-12 05:55:51.007450] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:43.726 [2024-12-12 05:55:51.007458] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:43.726 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.726 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:43.726 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.726 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.726 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:43.726 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:43.726 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:43.726 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.726 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.726 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.726 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.726 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.726 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.726 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.726 05:55:51 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.726 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.726 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.726 "name": "raid_bdev1", 00:17:43.726 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:43.726 "strip_size_kb": 0, 00:17:43.726 "state": "online", 00:17:43.726 "raid_level": "raid1", 00:17:43.726 "superblock": true, 00:17:43.726 "num_base_bdevs": 2, 00:17:43.726 "num_base_bdevs_discovered": 1, 00:17:43.726 "num_base_bdevs_operational": 1, 00:17:43.726 "base_bdevs_list": [ 00:17:43.726 { 00:17:43.726 "name": null, 00:17:43.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.726 "is_configured": false, 00:17:43.726 "data_offset": 0, 00:17:43.726 "data_size": 7936 00:17:43.726 }, 00:17:43.726 { 00:17:43.726 "name": "BaseBdev2", 00:17:43.726 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:43.726 "is_configured": true, 00:17:43.726 "data_offset": 256, 00:17:43.726 "data_size": 7936 00:17:43.726 } 00:17:43.726 ] 00:17:43.726 }' 00:17:43.726 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.726 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.986 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:43.986 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:43.986 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:43.986 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:43.986 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:43.986 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.986 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.986 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:43.986 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.986 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.245 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:44.245 "name": "raid_bdev1", 00:17:44.245 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:44.245 "strip_size_kb": 0, 00:17:44.245 "state": "online", 00:17:44.245 "raid_level": "raid1", 00:17:44.245 "superblock": true, 00:17:44.246 "num_base_bdevs": 2, 00:17:44.246 "num_base_bdevs_discovered": 1, 00:17:44.246 "num_base_bdevs_operational": 1, 00:17:44.246 "base_bdevs_list": [ 00:17:44.246 { 00:17:44.246 "name": null, 00:17:44.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:44.246 "is_configured": false, 00:17:44.246 "data_offset": 0, 00:17:44.246 "data_size": 7936 00:17:44.246 }, 00:17:44.246 { 00:17:44.246 "name": "BaseBdev2", 00:17:44.246 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:44.246 "is_configured": true, 00:17:44.246 "data_offset": 256, 00:17:44.246 "data_size": 7936 00:17:44.246 } 00:17:44.246 ] 00:17:44.246 }' 00:17:44.246 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:44.246 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:44.246 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:44.246 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:44.246 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:17:44.246 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.246 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.246 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.246 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:44.246 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.246 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:44.246 [2024-12-12 05:55:51.636303] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:44.246 [2024-12-12 05:55:51.636364] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.246 [2024-12-12 05:55:51.636402] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:44.246 [2024-12-12 05:55:51.636419] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.246 [2024-12-12 05:55:51.636876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.246 [2024-12-12 05:55:51.636907] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:44.246 [2024-12-12 05:55:51.637004] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:17:44.246 [2024-12-12 05:55:51.637019] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:44.246 [2024-12-12 05:55:51.637030] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:44.246 [2024-12-12 05:55:51.637041] 
bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:17:44.246 BaseBdev1 00:17:44.246 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.246 05:55:51 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:17:45.185 05:55:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:45.185 05:55:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.185 05:55:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.185 05:55:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:45.185 05:55:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:45.185 05:55:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:45.185 05:55:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.185 05:55:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.185 05:55:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.185 05:55:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.185 05:55:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.185 05:55:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.185 05:55:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.185 05:55:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.185 05:55:52 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.185 05:55:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.185 "name": "raid_bdev1", 00:17:45.185 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:45.185 "strip_size_kb": 0, 00:17:45.185 "state": "online", 00:17:45.185 "raid_level": "raid1", 00:17:45.185 "superblock": true, 00:17:45.185 "num_base_bdevs": 2, 00:17:45.185 "num_base_bdevs_discovered": 1, 00:17:45.185 "num_base_bdevs_operational": 1, 00:17:45.185 "base_bdevs_list": [ 00:17:45.185 { 00:17:45.185 "name": null, 00:17:45.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.185 "is_configured": false, 00:17:45.185 "data_offset": 0, 00:17:45.185 "data_size": 7936 00:17:45.185 }, 00:17:45.185 { 00:17:45.185 "name": "BaseBdev2", 00:17:45.185 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:45.185 "is_configured": true, 00:17:45.185 "data_offset": 256, 00:17:45.185 "data_size": 7936 00:17:45.185 } 00:17:45.185 ] 00:17:45.185 }' 00:17:45.185 05:55:52 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.185 05:55:52 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.755 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:45.755 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.755 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:45.755 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:45.755 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.755 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.755 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 
-- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.755 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.755 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.755 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.755 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.755 "name": "raid_bdev1", 00:17:45.755 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:45.755 "strip_size_kb": 0, 00:17:45.755 "state": "online", 00:17:45.755 "raid_level": "raid1", 00:17:45.755 "superblock": true, 00:17:45.755 "num_base_bdevs": 2, 00:17:45.755 "num_base_bdevs_discovered": 1, 00:17:45.755 "num_base_bdevs_operational": 1, 00:17:45.755 "base_bdevs_list": [ 00:17:45.756 { 00:17:45.756 "name": null, 00:17:45.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.756 "is_configured": false, 00:17:45.756 "data_offset": 0, 00:17:45.756 "data_size": 7936 00:17:45.756 }, 00:17:45.756 { 00:17:45.756 "name": "BaseBdev2", 00:17:45.756 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:45.756 "is_configured": true, 00:17:45.756 "data_offset": 256, 00:17:45.756 "data_size": 7936 00:17:45.756 } 00:17:45.756 ] 00:17:45.756 }' 00:17:45.756 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.756 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:45.756 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.756 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:45.756 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:45.756 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@652 -- # local es=0 00:17:45.756 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:45.756 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:45.756 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:45.756 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:45.756 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:45.756 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:17:45.756 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.756 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:45.756 [2024-12-12 05:55:53.253570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:45.756 [2024-12-12 05:55:53.253734] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:17:45.756 [2024-12-12 05:55:53.253751] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:17:45.756 request: 00:17:45.756 { 00:17:45.756 "base_bdev": "BaseBdev1", 00:17:45.756 "raid_bdev": "raid_bdev1", 00:17:45.756 "method": "bdev_raid_add_base_bdev", 00:17:45.756 "req_id": 1 00:17:45.756 } 00:17:45.756 Got JSON-RPC error response 00:17:45.756 response: 00:17:45.756 { 00:17:45.756 "code": -22, 00:17:45.756 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:17:45.756 } 00:17:45.756 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 
00:17:45.756 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:17:45.756 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:45.756 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:45.756 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:45.756 05:55:53 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:17:47.139 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:47.139 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:47.139 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:47.139 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:47.139 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:47.139 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:47.139 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:47.139 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:47.139 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:47.139 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:47.139 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.139 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.139 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:47.139 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.139 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.139 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:47.139 "name": "raid_bdev1", 00:17:47.139 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:47.139 "strip_size_kb": 0, 00:17:47.139 "state": "online", 00:17:47.139 "raid_level": "raid1", 00:17:47.139 "superblock": true, 00:17:47.139 "num_base_bdevs": 2, 00:17:47.139 "num_base_bdevs_discovered": 1, 00:17:47.139 "num_base_bdevs_operational": 1, 00:17:47.139 "base_bdevs_list": [ 00:17:47.139 { 00:17:47.139 "name": null, 00:17:47.139 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.139 "is_configured": false, 00:17:47.139 "data_offset": 0, 00:17:47.139 "data_size": 7936 00:17:47.139 }, 00:17:47.139 { 00:17:47.139 "name": "BaseBdev2", 00:17:47.139 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:47.139 "is_configured": true, 00:17:47.139 "data_offset": 256, 00:17:47.139 "data_size": 7936 00:17:47.139 } 00:17:47.139 ] 00:17:47.139 }' 00:17:47.139 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:47.139 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.399 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:47.399 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.399 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:47.399 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:47.399 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.399 05:55:54 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.399 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.399 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.399 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:47.399 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.399 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.399 "name": "raid_bdev1", 00:17:47.399 "uuid": "4747137c-71c5-4ff9-a49a-6ee1040bd09e", 00:17:47.399 "strip_size_kb": 0, 00:17:47.399 "state": "online", 00:17:47.399 "raid_level": "raid1", 00:17:47.399 "superblock": true, 00:17:47.399 "num_base_bdevs": 2, 00:17:47.399 "num_base_bdevs_discovered": 1, 00:17:47.399 "num_base_bdevs_operational": 1, 00:17:47.399 "base_bdevs_list": [ 00:17:47.399 { 00:17:47.399 "name": null, 00:17:47.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.399 "is_configured": false, 00:17:47.399 "data_offset": 0, 00:17:47.399 "data_size": 7936 00:17:47.399 }, 00:17:47.399 { 00:17:47.399 "name": "BaseBdev2", 00:17:47.399 "uuid": "d71fcfb4-bf80-5b9f-b76f-5cad0e04cb7d", 00:17:47.399 "is_configured": true, 00:17:47.399 "data_offset": 256, 00:17:47.399 "data_size": 7936 00:17:47.399 } 00:17:47.399 ] 00:17:47.399 }' 00:17:47.399 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.399 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:47.399 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.399 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:47.399 05:55:54 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 85954 00:17:47.399 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 85954 ']' 00:17:47.399 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 85954 00:17:47.399 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:17:47.400 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:47.400 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85954 00:17:47.400 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:47.400 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:47.400 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85954' 00:17:47.400 killing process with pid 85954 00:17:47.400 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 85954 00:17:47.400 Received shutdown signal, test time was about 60.000000 seconds 00:17:47.400 00:17:47.400 Latency(us) 00:17:47.400 [2024-12-12T05:55:54.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:47.400 [2024-12-12T05:55:54.922Z] =================================================================================================================== 00:17:47.400 [2024-12-12T05:55:54.922Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:47.400 [2024-12-12 05:55:54.846106] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:47.400 [2024-12-12 05:55:54.846219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:47.400 [2024-12-12 05:55:54.846268] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:17:47.400 05:55:54 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 85954 00:17:47.400 [2024-12-12 05:55:54.846278] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:17:47.660 [2024-12-12 05:55:55.130435] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:49.042 05:55:56 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:17:49.042 00:17:49.042 real 0m19.786s 00:17:49.042 user 0m25.877s 00:17:49.042 sys 0m2.675s 00:17:49.042 05:55:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:49.042 ************************************ 00:17:49.042 END TEST raid_rebuild_test_sb_4k 00:17:49.042 ************************************ 00:17:49.042 05:55:56 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:49.042 05:55:56 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:17:49.042 05:55:56 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:49.042 05:55:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:49.042 05:55:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:49.042 05:55:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:49.042 ************************************ 00:17:49.042 START TEST raid_state_function_test_sb_md_separate 00:17:49.042 ************************************ 00:17:49.042 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:17:49.042 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:17:49.042 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:17:49.042 05:55:56 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:49.042 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:49.042 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:49.042 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:49.042 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:49.042 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:49.042 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:49.042 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:49.042 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:49.042 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:49.042 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:49.042 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:49.042 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:49.043 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:49.043 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:49.043 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:49.043 05:55:56 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:17:49.043 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:17:49.043 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:49.043 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:49.043 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=86526 00:17:49.043 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:49.043 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86526' 00:17:49.043 Process raid pid: 86526 00:17:49.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.043 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 86526 00:17:49.043 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 86526 ']' 00:17:49.043 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.043 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:49.043 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:49.043 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:49.043 05:55:56 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.043 [2024-12-12 05:55:56.368175] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:17:49.043 [2024-12-12 05:55:56.368287] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:49.043 [2024-12-12 05:55:56.551306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.303 [2024-12-12 05:55:56.657675] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.563 [2024-12-12 05:55:56.847088] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:49.563 [2024-12-12 05:55:56.847125] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.823 [2024-12-12 05:55:57.176590] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:49.823 [2024-12-12 05:55:57.176645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:17:49.823 [2024-12-12 05:55:57.176655] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:49.823 [2024-12-12 05:55:57.176679] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:49.823 "name": "Existed_Raid", 00:17:49.823 "uuid": "fa08d726-99b4-46ba-9871-3cd281591177", 00:17:49.823 "strip_size_kb": 0, 00:17:49.823 "state": "configuring", 00:17:49.823 "raid_level": "raid1", 00:17:49.823 "superblock": true, 00:17:49.823 "num_base_bdevs": 2, 00:17:49.823 "num_base_bdevs_discovered": 0, 00:17:49.823 "num_base_bdevs_operational": 2, 00:17:49.823 "base_bdevs_list": [ 00:17:49.823 { 00:17:49.823 "name": "BaseBdev1", 00:17:49.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.823 "is_configured": false, 00:17:49.823 "data_offset": 0, 00:17:49.823 "data_size": 0 00:17:49.823 }, 00:17:49.823 { 00:17:49.823 "name": "BaseBdev2", 00:17:49.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.823 "is_configured": false, 00:17:49.823 "data_offset": 0, 00:17:49.823 "data_size": 0 00:17:49.823 } 00:17:49.823 ] 00:17:49.823 }' 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:49.823 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.393 
[2024-12-12 05:55:57.639704] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:50.393 [2024-12-12 05:55:57.639811] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.393 [2024-12-12 05:55:57.651684] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:50.393 [2024-12-12 05:55:57.651767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:50.393 [2024-12-12 05:55:57.651808] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:50.393 [2024-12-12 05:55:57.651832] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.393 [2024-12-12 05:55:57.701029] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:50.393 
BaseBdev1 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.393 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.393 [ 00:17:50.393 { 00:17:50.393 "name": "BaseBdev1", 00:17:50.393 "aliases": [ 00:17:50.393 "982f21f9-a6ed-4cc2-bd8f-58dd22776386" 00:17:50.393 ], 00:17:50.393 "product_name": "Malloc disk", 
00:17:50.393 "block_size": 4096, 00:17:50.393 "num_blocks": 8192, 00:17:50.393 "uuid": "982f21f9-a6ed-4cc2-bd8f-58dd22776386", 00:17:50.393 "md_size": 32, 00:17:50.393 "md_interleave": false, 00:17:50.393 "dif_type": 0, 00:17:50.393 "assigned_rate_limits": { 00:17:50.393 "rw_ios_per_sec": 0, 00:17:50.393 "rw_mbytes_per_sec": 0, 00:17:50.393 "r_mbytes_per_sec": 0, 00:17:50.393 "w_mbytes_per_sec": 0 00:17:50.393 }, 00:17:50.393 "claimed": true, 00:17:50.393 "claim_type": "exclusive_write", 00:17:50.393 "zoned": false, 00:17:50.393 "supported_io_types": { 00:17:50.393 "read": true, 00:17:50.393 "write": true, 00:17:50.393 "unmap": true, 00:17:50.393 "flush": true, 00:17:50.393 "reset": true, 00:17:50.393 "nvme_admin": false, 00:17:50.393 "nvme_io": false, 00:17:50.393 "nvme_io_md": false, 00:17:50.393 "write_zeroes": true, 00:17:50.393 "zcopy": true, 00:17:50.393 "get_zone_info": false, 00:17:50.393 "zone_management": false, 00:17:50.393 "zone_append": false, 00:17:50.393 "compare": false, 00:17:50.393 "compare_and_write": false, 00:17:50.393 "abort": true, 00:17:50.393 "seek_hole": false, 00:17:50.393 "seek_data": false, 00:17:50.393 "copy": true, 00:17:50.393 "nvme_iov_md": false 00:17:50.393 }, 00:17:50.393 "memory_domains": [ 00:17:50.393 { 00:17:50.393 "dma_device_id": "system", 00:17:50.393 "dma_device_type": 1 00:17:50.393 }, 00:17:50.393 { 00:17:50.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.393 "dma_device_type": 2 00:17:50.393 } 00:17:50.393 ], 00:17:50.393 "driver_specific": {} 00:17:50.393 } 00:17:50.393 ] 00:17:50.394 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.394 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:50.394 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:50.394 05:55:57 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.394 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.394 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.394 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.394 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.394 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.394 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.394 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.394 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.394 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.394 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.394 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.394 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.394 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.394 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.394 "name": "Existed_Raid", 00:17:50.394 "uuid": "a4cda4fd-6959-4ca3-860b-56d4c86cdd7d", 
00:17:50.394 "strip_size_kb": 0, 00:17:50.394 "state": "configuring", 00:17:50.394 "raid_level": "raid1", 00:17:50.394 "superblock": true, 00:17:50.394 "num_base_bdevs": 2, 00:17:50.394 "num_base_bdevs_discovered": 1, 00:17:50.394 "num_base_bdevs_operational": 2, 00:17:50.394 "base_bdevs_list": [ 00:17:50.394 { 00:17:50.394 "name": "BaseBdev1", 00:17:50.394 "uuid": "982f21f9-a6ed-4cc2-bd8f-58dd22776386", 00:17:50.394 "is_configured": true, 00:17:50.394 "data_offset": 256, 00:17:50.394 "data_size": 7936 00:17:50.394 }, 00:17:50.394 { 00:17:50.394 "name": "BaseBdev2", 00:17:50.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.394 "is_configured": false, 00:17:50.394 "data_offset": 0, 00:17:50.394 "data_size": 0 00:17:50.394 } 00:17:50.394 ] 00:17:50.394 }' 00:17:50.394 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.394 05:55:57 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.964 [2024-12-12 05:55:58.184257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:50.964 [2024-12-12 05:55:58.184302] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:17:50.964 05:55:58 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.964 [2024-12-12 05:55:58.192280] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:50.964 [2024-12-12 05:55:58.194090] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:50.964 [2024-12-12 05:55:58.194185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:50.964 "name": "Existed_Raid", 00:17:50.964 "uuid": "68ae0ce3-8452-4dbe-98c9-acf079cc64fe", 00:17:50.964 "strip_size_kb": 0, 00:17:50.964 "state": "configuring", 00:17:50.964 "raid_level": "raid1", 00:17:50.964 "superblock": true, 00:17:50.964 "num_base_bdevs": 2, 00:17:50.964 "num_base_bdevs_discovered": 1, 00:17:50.964 "num_base_bdevs_operational": 2, 00:17:50.964 "base_bdevs_list": [ 00:17:50.964 { 00:17:50.964 "name": "BaseBdev1", 00:17:50.964 "uuid": "982f21f9-a6ed-4cc2-bd8f-58dd22776386", 00:17:50.964 "is_configured": true, 00:17:50.964 "data_offset": 256, 00:17:50.964 "data_size": 7936 00:17:50.964 }, 00:17:50.964 { 00:17:50.964 "name": "BaseBdev2", 00:17:50.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.964 "is_configured": false, 00:17:50.964 "data_offset": 0, 00:17:50.964 "data_size": 0 00:17:50.964 } 00:17:50.964 ] 00:17:50.964 }' 00:17:50.964 05:55:58 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:50.964 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.224 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:51.224 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.224 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.224 [2024-12-12 05:55:58.734159] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:51.224 [2024-12-12 05:55:58.734542] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:51.224 [2024-12-12 05:55:58.734561] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:51.224 [2024-12-12 05:55:58.734658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:51.224 [2024-12-12 05:55:58.734799] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:51.224 [2024-12-12 05:55:58.734812] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:51.224 [2024-12-12 05:55:58.734907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:51.224 BaseBdev2 00:17:51.224 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.224 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:51.224 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:51.224 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:51.224 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:17:51.224 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:51.224 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:51.224 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:51.224 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.224 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.484 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.484 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:51.484 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.484 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.484 [ 00:17:51.484 { 00:17:51.484 "name": "BaseBdev2", 00:17:51.484 "aliases": [ 00:17:51.484 "435e3da4-9a7d-4073-9951-3375f7e55f6d" 00:17:51.484 ], 00:17:51.484 "product_name": "Malloc disk", 00:17:51.484 "block_size": 4096, 00:17:51.484 "num_blocks": 8192, 00:17:51.484 "uuid": "435e3da4-9a7d-4073-9951-3375f7e55f6d", 00:17:51.484 "md_size": 32, 00:17:51.484 "md_interleave": false, 00:17:51.484 "dif_type": 0, 00:17:51.484 "assigned_rate_limits": { 00:17:51.484 "rw_ios_per_sec": 0, 00:17:51.484 "rw_mbytes_per_sec": 0, 00:17:51.484 "r_mbytes_per_sec": 0, 00:17:51.484 "w_mbytes_per_sec": 0 00:17:51.484 }, 00:17:51.484 "claimed": true, 00:17:51.484 "claim_type": 
"exclusive_write", 00:17:51.484 "zoned": false, 00:17:51.484 "supported_io_types": { 00:17:51.484 "read": true, 00:17:51.484 "write": true, 00:17:51.484 "unmap": true, 00:17:51.484 "flush": true, 00:17:51.484 "reset": true, 00:17:51.484 "nvme_admin": false, 00:17:51.484 "nvme_io": false, 00:17:51.484 "nvme_io_md": false, 00:17:51.484 "write_zeroes": true, 00:17:51.484 "zcopy": true, 00:17:51.484 "get_zone_info": false, 00:17:51.484 "zone_management": false, 00:17:51.484 "zone_append": false, 00:17:51.484 "compare": false, 00:17:51.484 "compare_and_write": false, 00:17:51.484 "abort": true, 00:17:51.484 "seek_hole": false, 00:17:51.484 "seek_data": false, 00:17:51.484 "copy": true, 00:17:51.484 "nvme_iov_md": false 00:17:51.484 }, 00:17:51.484 "memory_domains": [ 00:17:51.484 { 00:17:51.484 "dma_device_id": "system", 00:17:51.484 "dma_device_type": 1 00:17:51.484 }, 00:17:51.484 { 00:17:51.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:51.484 "dma_device_type": 2 00:17:51.484 } 00:17:51.484 ], 00:17:51.484 "driver_specific": {} 00:17:51.484 } 00:17:51.484 ] 00:17:51.484 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.484 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:17:51.484 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:51.484 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:51.484 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:51.484 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:51.484 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:51.484 
05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:51.484 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:51.484 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:51.484 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:51.485 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:51.485 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:51.485 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:51.485 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.485 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:51.485 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.485 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.485 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.485 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:51.485 "name": "Existed_Raid", 00:17:51.485 "uuid": "68ae0ce3-8452-4dbe-98c9-acf079cc64fe", 00:17:51.485 "strip_size_kb": 0, 00:17:51.485 "state": "online", 00:17:51.485 "raid_level": "raid1", 00:17:51.485 "superblock": true, 00:17:51.485 "num_base_bdevs": 2, 00:17:51.485 "num_base_bdevs_discovered": 2, 00:17:51.485 "num_base_bdevs_operational": 2, 00:17:51.485 
"base_bdevs_list": [ 00:17:51.485 { 00:17:51.485 "name": "BaseBdev1", 00:17:51.485 "uuid": "982f21f9-a6ed-4cc2-bd8f-58dd22776386", 00:17:51.485 "is_configured": true, 00:17:51.485 "data_offset": 256, 00:17:51.485 "data_size": 7936 00:17:51.485 }, 00:17:51.485 { 00:17:51.485 "name": "BaseBdev2", 00:17:51.485 "uuid": "435e3da4-9a7d-4073-9951-3375f7e55f6d", 00:17:51.485 "is_configured": true, 00:17:51.485 "data_offset": 256, 00:17:51.485 "data_size": 7936 00:17:51.485 } 00:17:51.485 ] 00:17:51.485 }' 00:17:51.485 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:51.485 05:55:58 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:51.745 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:51.745 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:51.745 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:51.745 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:51.745 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:51.745 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:51.745 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:51.745 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:51.745 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.745 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:17:51.745 [2024-12-12 05:55:59.253595] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:52.005 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.005 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:52.005 "name": "Existed_Raid", 00:17:52.005 "aliases": [ 00:17:52.005 "68ae0ce3-8452-4dbe-98c9-acf079cc64fe" 00:17:52.005 ], 00:17:52.005 "product_name": "Raid Volume", 00:17:52.005 "block_size": 4096, 00:17:52.005 "num_blocks": 7936, 00:17:52.005 "uuid": "68ae0ce3-8452-4dbe-98c9-acf079cc64fe", 00:17:52.005 "md_size": 32, 00:17:52.005 "md_interleave": false, 00:17:52.005 "dif_type": 0, 00:17:52.005 "assigned_rate_limits": { 00:17:52.005 "rw_ios_per_sec": 0, 00:17:52.005 "rw_mbytes_per_sec": 0, 00:17:52.005 "r_mbytes_per_sec": 0, 00:17:52.005 "w_mbytes_per_sec": 0 00:17:52.005 }, 00:17:52.005 "claimed": false, 00:17:52.005 "zoned": false, 00:17:52.005 "supported_io_types": { 00:17:52.005 "read": true, 00:17:52.005 "write": true, 00:17:52.005 "unmap": false, 00:17:52.005 "flush": false, 00:17:52.005 "reset": true, 00:17:52.005 "nvme_admin": false, 00:17:52.005 "nvme_io": false, 00:17:52.005 "nvme_io_md": false, 00:17:52.005 "write_zeroes": true, 00:17:52.005 "zcopy": false, 00:17:52.005 "get_zone_info": false, 00:17:52.005 "zone_management": false, 00:17:52.005 "zone_append": false, 00:17:52.005 "compare": false, 00:17:52.005 "compare_and_write": false, 00:17:52.005 "abort": false, 00:17:52.005 "seek_hole": false, 00:17:52.005 "seek_data": false, 00:17:52.005 "copy": false, 00:17:52.005 "nvme_iov_md": false 00:17:52.005 }, 00:17:52.005 "memory_domains": [ 00:17:52.005 { 00:17:52.005 "dma_device_id": "system", 00:17:52.005 "dma_device_type": 1 00:17:52.005 }, 00:17:52.005 { 00:17:52.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.005 "dma_device_type": 2 00:17:52.005 }, 00:17:52.005 { 
00:17:52.005 "dma_device_id": "system", 00:17:52.005 "dma_device_type": 1 00:17:52.005 }, 00:17:52.005 { 00:17:52.005 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:52.005 "dma_device_type": 2 00:17:52.005 } 00:17:52.005 ], 00:17:52.005 "driver_specific": { 00:17:52.005 "raid": { 00:17:52.005 "uuid": "68ae0ce3-8452-4dbe-98c9-acf079cc64fe", 00:17:52.005 "strip_size_kb": 0, 00:17:52.005 "state": "online", 00:17:52.005 "raid_level": "raid1", 00:17:52.005 "superblock": true, 00:17:52.005 "num_base_bdevs": 2, 00:17:52.005 "num_base_bdevs_discovered": 2, 00:17:52.005 "num_base_bdevs_operational": 2, 00:17:52.005 "base_bdevs_list": [ 00:17:52.005 { 00:17:52.005 "name": "BaseBdev1", 00:17:52.005 "uuid": "982f21f9-a6ed-4cc2-bd8f-58dd22776386", 00:17:52.006 "is_configured": true, 00:17:52.006 "data_offset": 256, 00:17:52.006 "data_size": 7936 00:17:52.006 }, 00:17:52.006 { 00:17:52.006 "name": "BaseBdev2", 00:17:52.006 "uuid": "435e3da4-9a7d-4073-9951-3375f7e55f6d", 00:17:52.006 "is_configured": true, 00:17:52.006 "data_offset": 256, 00:17:52.006 "data_size": 7936 00:17:52.006 } 00:17:52.006 ] 00:17:52.006 } 00:17:52.006 } 00:17:52.006 }' 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:52.006 BaseBdev2' 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.006 [2024-12-12 05:55:59.413075] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.006 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.266 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.266 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:52.266 "name": "Existed_Raid", 00:17:52.266 "uuid": "68ae0ce3-8452-4dbe-98c9-acf079cc64fe", 00:17:52.266 "strip_size_kb": 0, 00:17:52.266 "state": "online", 00:17:52.266 "raid_level": "raid1", 00:17:52.266 "superblock": true, 00:17:52.266 "num_base_bdevs": 2, 00:17:52.266 "num_base_bdevs_discovered": 1, 00:17:52.266 "num_base_bdevs_operational": 1, 00:17:52.266 "base_bdevs_list": [ 00:17:52.266 { 00:17:52.266 "name": null, 00:17:52.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.266 "is_configured": false, 00:17:52.266 "data_offset": 0, 00:17:52.266 "data_size": 7936 00:17:52.266 }, 00:17:52.266 { 00:17:52.266 "name": "BaseBdev2", 00:17:52.266 "uuid": 
"435e3da4-9a7d-4073-9951-3375f7e55f6d", 00:17:52.266 "is_configured": true, 00:17:52.266 "data_offset": 256, 00:17:52.266 "data_size": 7936 00:17:52.266 } 00:17:52.266 ] 00:17:52.266 }' 00:17:52.266 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:52.266 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.526 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:52.526 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:52.526 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.526 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.526 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:52.526 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.526 05:55:59 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.526 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:52.526 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:52.526 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:52.526 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.526 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.526 [2024-12-12 05:56:00.032421] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:52.526 [2024-12-12 05:56:00.032601] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:52.786 [2024-12-12 05:56:00.127291] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:52.786 [2024-12-12 05:56:00.127417] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:52.786 [2024-12-12 05:56:00.127459] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:52.786 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.786 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:52.786 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:52.786 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.786 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:52.786 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.786 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:52.786 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.786 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:52.786 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:52.786 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:17:52.786 05:56:00 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 86526 00:17:52.786 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 86526 ']' 00:17:52.786 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 86526 00:17:52.786 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:52.786 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:52.786 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86526 00:17:52.786 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:52.786 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:52.786 killing process with pid 86526 00:17:52.786 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86526' 00:17:52.786 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 86526 00:17:52.786 [2024-12-12 05:56:00.220144] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:52.786 05:56:00 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 86526 00:17:52.786 [2024-12-12 05:56:00.235146] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:54.168 ************************************ 00:17:54.168 END TEST raid_state_function_test_sb_md_separate 00:17:54.168 ************************************ 00:17:54.168 05:56:01 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:17:54.168 00:17:54.168 real 0m5.023s 00:17:54.168 user 0m7.163s 
00:17:54.168 sys 0m0.945s 00:17:54.168 05:56:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.168 05:56:01 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.168 05:56:01 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:54.168 05:56:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:54.168 05:56:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.168 05:56:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:54.168 ************************************ 00:17:54.168 START TEST raid_superblock_test_md_separate 00:17:54.168 ************************************ 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local 
raid_bdev_name=raid_bdev1 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=86751 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 86751 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 86751 ']' 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.168 05:56:01 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.168 [2024-12-12 05:56:01.448260] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:17:54.168 [2024-12-12 05:56:01.448574] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86751 ] 00:17:54.168 [2024-12-12 05:56:01.618542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.428 [2024-12-12 05:56:01.727883] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.428 [2024-12-12 05:56:01.897810] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.428 [2024-12-12 05:56:01.897951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:54.999 05:56:02 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.999 malloc1 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.999 [2024-12-12 05:56:02.306442] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:54.999 [2024-12-12 05:56:02.306615] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.999 [2024-12-12 05:56:02.306653] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:54.999 [2024-12-12 05:56:02.306663] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.999 [2024-12-12 05:56:02.308535] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.999 [2024-12-12 05:56:02.308572] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:17:54.999 pt1 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.999 malloc2 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.999 05:56:02 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:54.999 [2024-12-12 05:56:02.360284] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:54.999 [2024-12-12 05:56:02.360418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.999 [2024-12-12 05:56:02.360454] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:54.999 [2024-12-12 05:56:02.360480] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.999 [2024-12-12 05:56:02.362327] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.999 [2024-12-12 05:56:02.362398] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:54.999 pt2 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:54.999 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:55.000 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:17:55.000 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.000 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.000 [2024-12-12 05:56:02.372286] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:55.000 [2024-12-12 05:56:02.374087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:55.000 [2024-12-12 05:56:02.374314] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:55.000 [2024-12-12 05:56:02.374370] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:55.000 [2024-12-12 05:56:02.374476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:17:55.000 [2024-12-12 05:56:02.374675] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:55.000 [2024-12-12 05:56:02.374724] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:55.000 [2024-12-12 05:56:02.374899] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.000 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.000 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:55.000 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.000 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:55.000 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.000 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:55.000 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.000 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.000 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.000 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.000 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.000 05:56:02 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.000 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.000 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.000 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.000 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.000 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:55.000 "name": "raid_bdev1", 00:17:55.000 "uuid": "0cf81e1c-1197-402a-bb3e-e7beb5a9bb8d", 00:17:55.000 "strip_size_kb": 0, 00:17:55.000 "state": "online", 00:17:55.000 "raid_level": "raid1", 00:17:55.000 "superblock": true, 00:17:55.000 "num_base_bdevs": 2, 00:17:55.000 "num_base_bdevs_discovered": 2, 00:17:55.000 "num_base_bdevs_operational": 2, 00:17:55.000 "base_bdevs_list": [ 00:17:55.000 { 00:17:55.000 "name": "pt1", 00:17:55.000 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:55.000 "is_configured": true, 00:17:55.000 "data_offset": 256, 00:17:55.000 "data_size": 7936 00:17:55.000 }, 00:17:55.000 { 00:17:55.000 "name": "pt2", 00:17:55.000 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:55.000 "is_configured": true, 00:17:55.000 "data_offset": 256, 00:17:55.000 "data_size": 7936 00:17:55.000 } 00:17:55.000 ] 00:17:55.000 }' 00:17:55.000 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:55.000 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:55.571 [2024-12-12 05:56:02.819761] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:55.571 "name": "raid_bdev1", 00:17:55.571 "aliases": [ 00:17:55.571 "0cf81e1c-1197-402a-bb3e-e7beb5a9bb8d" 00:17:55.571 ], 00:17:55.571 "product_name": "Raid Volume", 00:17:55.571 "block_size": 4096, 00:17:55.571 "num_blocks": 7936, 00:17:55.571 "uuid": "0cf81e1c-1197-402a-bb3e-e7beb5a9bb8d", 00:17:55.571 "md_size": 32, 00:17:55.571 "md_interleave": false, 00:17:55.571 "dif_type": 0, 00:17:55.571 "assigned_rate_limits": { 00:17:55.571 "rw_ios_per_sec": 0, 00:17:55.571 "rw_mbytes_per_sec": 0, 00:17:55.571 "r_mbytes_per_sec": 0, 00:17:55.571 "w_mbytes_per_sec": 0 00:17:55.571 }, 00:17:55.571 "claimed": false, 00:17:55.571 "zoned": false, 
00:17:55.571 "supported_io_types": { 00:17:55.571 "read": true, 00:17:55.571 "write": true, 00:17:55.571 "unmap": false, 00:17:55.571 "flush": false, 00:17:55.571 "reset": true, 00:17:55.571 "nvme_admin": false, 00:17:55.571 "nvme_io": false, 00:17:55.571 "nvme_io_md": false, 00:17:55.571 "write_zeroes": true, 00:17:55.571 "zcopy": false, 00:17:55.571 "get_zone_info": false, 00:17:55.571 "zone_management": false, 00:17:55.571 "zone_append": false, 00:17:55.571 "compare": false, 00:17:55.571 "compare_and_write": false, 00:17:55.571 "abort": false, 00:17:55.571 "seek_hole": false, 00:17:55.571 "seek_data": false, 00:17:55.571 "copy": false, 00:17:55.571 "nvme_iov_md": false 00:17:55.571 }, 00:17:55.571 "memory_domains": [ 00:17:55.571 { 00:17:55.571 "dma_device_id": "system", 00:17:55.571 "dma_device_type": 1 00:17:55.571 }, 00:17:55.571 { 00:17:55.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.571 "dma_device_type": 2 00:17:55.571 }, 00:17:55.571 { 00:17:55.571 "dma_device_id": "system", 00:17:55.571 "dma_device_type": 1 00:17:55.571 }, 00:17:55.571 { 00:17:55.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:55.571 "dma_device_type": 2 00:17:55.571 } 00:17:55.571 ], 00:17:55.571 "driver_specific": { 00:17:55.571 "raid": { 00:17:55.571 "uuid": "0cf81e1c-1197-402a-bb3e-e7beb5a9bb8d", 00:17:55.571 "strip_size_kb": 0, 00:17:55.571 "state": "online", 00:17:55.571 "raid_level": "raid1", 00:17:55.571 "superblock": true, 00:17:55.571 "num_base_bdevs": 2, 00:17:55.571 "num_base_bdevs_discovered": 2, 00:17:55.571 "num_base_bdevs_operational": 2, 00:17:55.571 "base_bdevs_list": [ 00:17:55.571 { 00:17:55.571 "name": "pt1", 00:17:55.571 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:55.571 "is_configured": true, 00:17:55.571 "data_offset": 256, 00:17:55.571 "data_size": 7936 00:17:55.571 }, 00:17:55.571 { 00:17:55.571 "name": "pt2", 00:17:55.571 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:55.571 "is_configured": true, 00:17:55.571 "data_offset": 256, 
00:17:55.571 "data_size": 7936 00:17:55.571 } 00:17:55.571 ] 00:17:55.571 } 00:17:55.571 } 00:17:55.571 }' 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:55.571 pt2' 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.571 05:56:02 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.571 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:55.571 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:55.571 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:55.571 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:17:55.571 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.571 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.571 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:55.571 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.571 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:55.571 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:55.571 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:55.571 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.571 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.571 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:55.571 [2024-12-12 05:56:03.075266] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=0cf81e1c-1197-402a-bb3e-e7beb5a9bb8d 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 0cf81e1c-1197-402a-bb3e-e7beb5a9bb8d ']' 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.832 [2024-12-12 05:56:03.122952] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:55.832 [2024-12-12 05:56:03.123020] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:55.832 [2024-12-12 05:56:03.123109] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:55.832 [2024-12-12 05:56:03.123156] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:55.832 [2024-12-12 05:56:03.123167] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:55.832 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:17:55.832 05:56:03 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.833 [2024-12-12 05:56:03.262726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:55.833 [2024-12-12 05:56:03.264469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:55.833 [2024-12-12 05:56:03.264556] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:55.833 [2024-12-12 05:56:03.264605] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:55.833 [2024-12-12 05:56:03.264619] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:55.833 [2024-12-12 05:56:03.264628] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:55.833 request: 00:17:55.833 { 00:17:55.833 "name": 
"raid_bdev1", 00:17:55.833 "raid_level": "raid1", 00:17:55.833 "base_bdevs": [ 00:17:55.833 "malloc1", 00:17:55.833 "malloc2" 00:17:55.833 ], 00:17:55.833 "superblock": false, 00:17:55.833 "method": "bdev_raid_create", 00:17:55.833 "req_id": 1 00:17:55.833 } 00:17:55.833 Got JSON-RPC error response 00:17:55.833 response: 00:17:55.833 { 00:17:55.833 "code": -17, 00:17:55.833 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:55.833 } 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:55.833 [2024-12-12 05:56:03.326719] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:55.833 [2024-12-12 05:56:03.326832] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:55.833 [2024-12-12 05:56:03.326863] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:55.833 [2024-12-12 05:56:03.326890] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:55.833 [2024-12-12 05:56:03.328751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:55.833 [2024-12-12 05:56:03.328821] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:55.833 [2024-12-12 05:56:03.328896] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:55.833 [2024-12-12 05:56:03.328958] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:55.833 pt1 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.833 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.093 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.093 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.093 "name": "raid_bdev1", 00:17:56.093 "uuid": "0cf81e1c-1197-402a-bb3e-e7beb5a9bb8d", 00:17:56.093 "strip_size_kb": 0, 00:17:56.093 "state": "configuring", 00:17:56.093 "raid_level": "raid1", 00:17:56.093 "superblock": true, 00:17:56.093 "num_base_bdevs": 2, 00:17:56.093 "num_base_bdevs_discovered": 1, 00:17:56.093 "num_base_bdevs_operational": 2, 00:17:56.093 "base_bdevs_list": [ 00:17:56.093 { 00:17:56.093 "name": "pt1", 00:17:56.093 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:56.093 "is_configured": true, 00:17:56.093 "data_offset": 256, 00:17:56.093 "data_size": 7936 00:17:56.093 }, 00:17:56.093 { 00:17:56.093 "name": null, 00:17:56.093 
"uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.093 "is_configured": false, 00:17:56.093 "data_offset": 256, 00:17:56.093 "data_size": 7936 00:17:56.093 } 00:17:56.093 ] 00:17:56.093 }' 00:17:56.093 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.093 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.354 [2024-12-12 05:56:03.777936] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:56.354 [2024-12-12 05:56:03.777995] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:56.354 [2024-12-12 05:56:03.778011] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:17:56.354 [2024-12-12 05:56:03.778021] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:56.354 [2024-12-12 05:56:03.778161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:56.354 [2024-12-12 05:56:03.778175] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:56.354 [2024-12-12 05:56:03.778210] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:17:56.354 [2024-12-12 05:56:03.778226] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:56.354 [2024-12-12 05:56:03.778319] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:56.354 [2024-12-12 05:56:03.778329] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:56.354 [2024-12-12 05:56:03.778392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:56.354 [2024-12-12 05:56:03.778487] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:56.354 [2024-12-12 05:56:03.778495] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:56.354 [2024-12-12 05:56:03.778591] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.354 pt2 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.354 "name": "raid_bdev1", 00:17:56.354 "uuid": "0cf81e1c-1197-402a-bb3e-e7beb5a9bb8d", 00:17:56.354 "strip_size_kb": 0, 00:17:56.354 "state": "online", 00:17:56.354 "raid_level": "raid1", 00:17:56.354 "superblock": true, 00:17:56.354 "num_base_bdevs": 2, 00:17:56.354 "num_base_bdevs_discovered": 2, 00:17:56.354 "num_base_bdevs_operational": 2, 00:17:56.354 "base_bdevs_list": [ 00:17:56.354 { 00:17:56.354 "name": "pt1", 00:17:56.354 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:56.354 "is_configured": true, 00:17:56.354 "data_offset": 256, 00:17:56.354 "data_size": 7936 00:17:56.354 }, 00:17:56.354 { 00:17:56.354 "name": "pt2", 00:17:56.354 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.354 "is_configured": true, 00:17:56.354 "data_offset": 256, 
00:17:56.354 "data_size": 7936 00:17:56.354 } 00:17:56.354 ] 00:17:56.354 }' 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.354 05:56:03 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.924 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:56.924 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:56.924 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:56.924 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:56.924 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:17:56.924 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:56.924 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:56.924 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.924 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:56.924 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:56.924 [2024-12-12 05:56:04.277334] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:56.924 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.924 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:56.924 "name": "raid_bdev1", 00:17:56.924 "aliases": [ 00:17:56.924 "0cf81e1c-1197-402a-bb3e-e7beb5a9bb8d" 00:17:56.924 ], 00:17:56.924 "product_name": 
"Raid Volume", 00:17:56.924 "block_size": 4096, 00:17:56.924 "num_blocks": 7936, 00:17:56.924 "uuid": "0cf81e1c-1197-402a-bb3e-e7beb5a9bb8d", 00:17:56.924 "md_size": 32, 00:17:56.924 "md_interleave": false, 00:17:56.924 "dif_type": 0, 00:17:56.924 "assigned_rate_limits": { 00:17:56.924 "rw_ios_per_sec": 0, 00:17:56.924 "rw_mbytes_per_sec": 0, 00:17:56.924 "r_mbytes_per_sec": 0, 00:17:56.924 "w_mbytes_per_sec": 0 00:17:56.924 }, 00:17:56.924 "claimed": false, 00:17:56.924 "zoned": false, 00:17:56.924 "supported_io_types": { 00:17:56.924 "read": true, 00:17:56.924 "write": true, 00:17:56.924 "unmap": false, 00:17:56.924 "flush": false, 00:17:56.924 "reset": true, 00:17:56.924 "nvme_admin": false, 00:17:56.924 "nvme_io": false, 00:17:56.924 "nvme_io_md": false, 00:17:56.924 "write_zeroes": true, 00:17:56.924 "zcopy": false, 00:17:56.924 "get_zone_info": false, 00:17:56.924 "zone_management": false, 00:17:56.924 "zone_append": false, 00:17:56.924 "compare": false, 00:17:56.924 "compare_and_write": false, 00:17:56.924 "abort": false, 00:17:56.924 "seek_hole": false, 00:17:56.924 "seek_data": false, 00:17:56.924 "copy": false, 00:17:56.924 "nvme_iov_md": false 00:17:56.924 }, 00:17:56.924 "memory_domains": [ 00:17:56.924 { 00:17:56.924 "dma_device_id": "system", 00:17:56.924 "dma_device_type": 1 00:17:56.924 }, 00:17:56.924 { 00:17:56.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.924 "dma_device_type": 2 00:17:56.924 }, 00:17:56.924 { 00:17:56.924 "dma_device_id": "system", 00:17:56.924 "dma_device_type": 1 00:17:56.924 }, 00:17:56.924 { 00:17:56.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:56.924 "dma_device_type": 2 00:17:56.924 } 00:17:56.924 ], 00:17:56.924 "driver_specific": { 00:17:56.924 "raid": { 00:17:56.924 "uuid": "0cf81e1c-1197-402a-bb3e-e7beb5a9bb8d", 00:17:56.924 "strip_size_kb": 0, 00:17:56.924 "state": "online", 00:17:56.924 "raid_level": "raid1", 00:17:56.924 "superblock": true, 00:17:56.924 "num_base_bdevs": 2, 00:17:56.924 
"num_base_bdevs_discovered": 2, 00:17:56.924 "num_base_bdevs_operational": 2, 00:17:56.924 "base_bdevs_list": [ 00:17:56.924 { 00:17:56.924 "name": "pt1", 00:17:56.924 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:56.924 "is_configured": true, 00:17:56.924 "data_offset": 256, 00:17:56.924 "data_size": 7936 00:17:56.924 }, 00:17:56.924 { 00:17:56.924 "name": "pt2", 00:17:56.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:56.924 "is_configured": true, 00:17:56.924 "data_offset": 256, 00:17:56.924 "data_size": 7936 00:17:56.924 } 00:17:56.924 ] 00:17:56.924 } 00:17:56.925 } 00:17:56.925 }' 00:17:56.925 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:56.925 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:56.925 pt2' 00:17:56.925 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:56.925 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:17:56.925 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:56.925 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:56.925 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:56.925 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.925 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.185 
05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.185 [2024-12-12 05:56:04.532896] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 0cf81e1c-1197-402a-bb3e-e7beb5a9bb8d '!=' 0cf81e1c-1197-402a-bb3e-e7beb5a9bb8d ']' 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.185 [2024-12-12 05:56:04.564641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.185 05:56:04 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.185 "name": "raid_bdev1", 00:17:57.185 "uuid": "0cf81e1c-1197-402a-bb3e-e7beb5a9bb8d", 00:17:57.185 "strip_size_kb": 0, 00:17:57.185 "state": "online", 00:17:57.185 "raid_level": "raid1", 00:17:57.185 "superblock": true, 00:17:57.185 "num_base_bdevs": 2, 00:17:57.185 "num_base_bdevs_discovered": 1, 00:17:57.185 "num_base_bdevs_operational": 1, 00:17:57.185 "base_bdevs_list": [ 00:17:57.185 { 00:17:57.185 "name": null, 00:17:57.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.185 "is_configured": false, 00:17:57.185 "data_offset": 0, 00:17:57.185 "data_size": 7936 00:17:57.185 }, 00:17:57.185 { 00:17:57.185 "name": "pt2", 00:17:57.185 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.185 "is_configured": true, 00:17:57.185 "data_offset": 256, 00:17:57.185 "data_size": 7936 00:17:57.185 } 00:17:57.185 ] 00:17:57.185 }' 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:17:57.185 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.757 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:57.757 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.757 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.757 [2024-12-12 05:56:04.979921] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:57.757 [2024-12-12 05:56:04.979992] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:57.757 [2024-12-12 05:56:04.980061] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:57.757 [2024-12-12 05:56:04.980112] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:57.757 [2024-12-12 05:56:04.980188] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:57.757 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.757 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.757 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.757 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.757 05:56:04 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:57.757 05:56:04 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.757 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:57.757 05:56:05 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:57.757 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:57.757 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:57.757 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:57.757 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.757 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.757 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.757 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:57.757 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:57.757 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:57.757 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:57.757 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:17:57.757 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:57.757 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.757 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.757 [2024-12-12 05:56:05.055791] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:57.757 [2024-12-12 05:56:05.055898] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:57.758 
[2024-12-12 05:56:05.055927] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:17:57.758 [2024-12-12 05:56:05.055955] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:57.758 [2024-12-12 05:56:05.057871] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:57.758 [2024-12-12 05:56:05.057943] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:57.758 [2024-12-12 05:56:05.058025] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:57.758 [2024-12-12 05:56:05.058106] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:57.758 [2024-12-12 05:56:05.058242] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:57.758 [2024-12-12 05:56:05.058281] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:57.758 [2024-12-12 05:56:05.058400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:57.758 [2024-12-12 05:56:05.058549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:57.758 [2024-12-12 05:56:05.058588] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:57.758 [2024-12-12 05:56:05.058740] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:57.758 pt2 00:17:57.758 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.758 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:57.758 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:57.758 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:17:57.758 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:57.758 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:57.758 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:57.758 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:57.758 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:57.758 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:57.758 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:57.758 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:57.758 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.758 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.758 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:57.758 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.758 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:57.758 "name": "raid_bdev1", 00:17:57.758 "uuid": "0cf81e1c-1197-402a-bb3e-e7beb5a9bb8d", 00:17:57.758 "strip_size_kb": 0, 00:17:57.758 "state": "online", 00:17:57.758 "raid_level": "raid1", 00:17:57.758 "superblock": true, 00:17:57.758 "num_base_bdevs": 2, 00:17:57.758 "num_base_bdevs_discovered": 1, 00:17:57.758 "num_base_bdevs_operational": 1, 00:17:57.758 "base_bdevs_list": [ 00:17:57.758 { 00:17:57.758 
"name": null, 00:17:57.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:57.758 "is_configured": false, 00:17:57.758 "data_offset": 256, 00:17:57.758 "data_size": 7936 00:17:57.758 }, 00:17:57.758 { 00:17:57.758 "name": "pt2", 00:17:57.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:57.758 "is_configured": true, 00:17:57.758 "data_offset": 256, 00:17:57.758 "data_size": 7936 00:17:57.758 } 00:17:57.758 ] 00:17:57.758 }' 00:17:57.758 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:57.758 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.018 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:58.018 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.018 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.018 [2024-12-12 05:56:05.499016] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:58.018 [2024-12-12 05:56:05.499041] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:58.018 [2024-12-12 05:56:05.499085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.018 [2024-12-12 05:56:05.499121] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:58.018 [2024-12-12 05:56:05.499128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:58.018 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.018 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.018 05:56:05 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.018 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.018 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:58.018 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.278 [2024-12-12 05:56:05.558950] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:58.278 [2024-12-12 05:56:05.558996] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.278 [2024-12-12 05:56:05.559013] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:17:58.278 [2024-12-12 05:56:05.559021] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.278 [2024-12-12 05:56:05.560828] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.278 [2024-12-12 05:56:05.560864] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:58.278 [2024-12-12 05:56:05.560907] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:17:58.278 [2024-12-12 05:56:05.560955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:58.278 [2024-12-12 05:56:05.561077] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:58.278 [2024-12-12 05:56:05.561086] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:58.278 [2024-12-12 05:56:05.561099] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:58.278 [2024-12-12 05:56:05.561167] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:58.278 [2024-12-12 05:56:05.561224] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:58.278 [2024-12-12 05:56:05.561231] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:58.278 [2024-12-12 05:56:05.561282] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:58.278 [2024-12-12 05:56:05.561433] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:58.278 [2024-12-12 05:56:05.561451] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:58.278 [2024-12-12 05:56:05.561564] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.278 pt1 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.278 "name": "raid_bdev1", 00:17:58.278 "uuid": "0cf81e1c-1197-402a-bb3e-e7beb5a9bb8d", 00:17:58.278 "strip_size_kb": 0, 00:17:58.278 "state": "online", 00:17:58.278 "raid_level": "raid1", 00:17:58.278 "superblock": true, 00:17:58.278 "num_base_bdevs": 2, 00:17:58.278 "num_base_bdevs_discovered": 1, 00:17:58.278 
"num_base_bdevs_operational": 1, 00:17:58.278 "base_bdevs_list": [ 00:17:58.278 { 00:17:58.278 "name": null, 00:17:58.278 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.278 "is_configured": false, 00:17:58.278 "data_offset": 256, 00:17:58.278 "data_size": 7936 00:17:58.278 }, 00:17:58.278 { 00:17:58.278 "name": "pt2", 00:17:58.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:58.278 "is_configured": true, 00:17:58.278 "data_offset": 256, 00:17:58.278 "data_size": 7936 00:17:58.278 } 00:17:58.278 ] 00:17:58.278 }' 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.278 05:56:05 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.538 05:56:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:58.538 05:56:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:58.538 05:56:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.538 05:56:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.538 05:56:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.538 05:56:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:58.538 05:56:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:58.538 05:56:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:58.538 05:56:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.538 05:56:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:58.538 [2024-12-12 
05:56:06.054383] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:58.798 05:56:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.798 05:56:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 0cf81e1c-1197-402a-bb3e-e7beb5a9bb8d '!=' 0cf81e1c-1197-402a-bb3e-e7beb5a9bb8d ']' 00:17:58.798 05:56:06 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 86751 00:17:58.798 05:56:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 86751 ']' 00:17:58.798 05:56:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 86751 00:17:58.798 05:56:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:17:58.798 05:56:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.798 05:56:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86751 00:17:58.798 05:56:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:58.798 05:56:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:58.798 killing process with pid 86751 00:17:58.798 05:56:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86751' 00:17:58.798 05:56:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 86751 00:17:58.798 [2024-12-12 05:56:06.134672] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:58.798 [2024-12-12 05:56:06.134734] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:58.798 [2024-12-12 05:56:06.134771] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:17:58.798 [2024-12-12 05:56:06.134786] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:58.798 05:56:06 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 86751 00:17:59.058 [2024-12-12 05:56:06.342328] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:59.998 ************************************ 00:17:59.998 END TEST raid_superblock_test_md_separate 00:17:59.998 ************************************ 00:17:59.998 05:56:07 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:17:59.998 00:17:59.998 real 0m6.042s 00:17:59.998 user 0m9.187s 00:17:59.998 sys 0m1.068s 00:17:59.998 05:56:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:59.998 05:56:07 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:59.998 05:56:07 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:17:59.998 05:56:07 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:17:59.998 05:56:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:59.998 05:56:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:59.998 05:56:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:59.998 ************************************ 00:17:59.998 START TEST raid_rebuild_test_sb_md_separate 00:17:59.998 ************************************ 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:59.998 
05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=87039 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 87039 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87039 ']' 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.998 05:56:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:00.259 [2024-12-12 05:56:07.595164] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:18:00.259 [2024-12-12 05:56:07.595380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87039 ] 00:18:00.259 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:00.259 Zero copy mechanism will not be used. 00:18:00.259 [2024-12-12 05:56:07.772990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.519 [2024-12-12 05:56:07.881146] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.778 [2024-12-12 05:56:08.072698] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:00.778 [2024-12-12 05:56:08.072814] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:01.038 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.038 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:18:01.038 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:01.038 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:01.038 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.038 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.038 BaseBdev1_malloc 
00:18:01.038 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.038 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:01.038 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.038 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.038 [2024-12-12 05:56:08.438412] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:01.038 [2024-12-12 05:56:08.438538] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.038 [2024-12-12 05:56:08.438596] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:01.038 [2024-12-12 05:56:08.438628] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.038 [2024-12-12 05:56:08.440571] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.038 [2024-12-12 05:56:08.440639] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:01.038 BaseBdev1 00:18:01.038 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.039 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:01.039 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:01.039 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.039 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.039 BaseBdev2_malloc 00:18:01.039 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.039 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:01.039 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.039 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.039 [2024-12-12 05:56:08.494286] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:01.039 [2024-12-12 05:56:08.494360] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.039 [2024-12-12 05:56:08.494380] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:01.039 [2024-12-12 05:56:08.494392] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.039 [2024-12-12 05:56:08.496234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.039 [2024-12-12 05:56:08.496274] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:01.039 BaseBdev2 00:18:01.039 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.039 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:01.039 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.039 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.299 spare_malloc 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.299 spare_delay 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.299 [2024-12-12 05:56:08.590064] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:01.299 [2024-12-12 05:56:08.590120] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.299 [2024-12-12 05:56:08.590158] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:01.299 [2024-12-12 05:56:08.590168] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.299 [2024-12-12 05:56:08.592036] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.299 [2024-12-12 05:56:08.592074] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:01.299 spare 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:01.299 [2024-12-12 05:56:08.602086] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:01.299 [2024-12-12 05:56:08.603814] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:01.299 [2024-12-12 05:56:08.603990] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:01.299 [2024-12-12 05:56:08.604005] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:01.299 [2024-12-12 05:56:08.604071] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:01.299 [2024-12-12 05:56:08.604202] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:01.299 [2024-12-12 05:56:08.604211] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:01.299 [2024-12-12 05:56:08.604313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:01.299 05:56:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.299 "name": "raid_bdev1", 00:18:01.299 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:01.299 "strip_size_kb": 0, 00:18:01.299 "state": "online", 00:18:01.299 "raid_level": "raid1", 00:18:01.299 "superblock": true, 00:18:01.299 "num_base_bdevs": 2, 00:18:01.299 "num_base_bdevs_discovered": 2, 00:18:01.299 "num_base_bdevs_operational": 2, 00:18:01.299 "base_bdevs_list": [ 00:18:01.299 { 00:18:01.299 "name": "BaseBdev1", 00:18:01.299 "uuid": "64136260-9504-5f71-a505-ad3c64fbf34a", 00:18:01.299 "is_configured": true, 00:18:01.299 "data_offset": 256, 00:18:01.299 "data_size": 7936 00:18:01.299 }, 00:18:01.299 { 00:18:01.299 "name": "BaseBdev2", 00:18:01.299 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:01.299 "is_configured": true, 00:18:01.299 "data_offset": 256, 00:18:01.299 "data_size": 7936 
00:18:01.299 } 00:18:01.299 ] 00:18:01.299 }' 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.299 05:56:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.869 [2024-12-12 05:56:09.101429] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:01.869 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:01.869 [2024-12-12 05:56:09.372832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:01.869 /dev/nbd0 00:18:02.129 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:02.129 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:02.129 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:02.129 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@873 -- # local i 00:18:02.129 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:02.129 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:02.129 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:02.129 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:02.129 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:02.129 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:02.129 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:02.129 1+0 records in 00:18:02.129 1+0 records out 00:18:02.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528568 s, 7.7 MB/s 00:18:02.129 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.129 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:02.129 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:02.129 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:02.129 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:02.129 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:02.129 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:02.129 05:56:09 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:02.129 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:02.129 05:56:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:02.699 7936+0 records in 00:18:02.699 7936+0 records out 00:18:02.699 32505856 bytes (33 MB, 31 MiB) copied, 0.59826 s, 54.3 MB/s 00:18:02.699 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:02.699 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:02.699 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:02.699 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:02.699 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:02.699 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:02.699 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:02.959 [2024-12-12 05:56:10.267607] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.959 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:02.959 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:02.959 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:02.959 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:02.959 05:56:10 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:02.959 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:02.959 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:02.959 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:02.959 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:02.959 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.959 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.959 [2024-12-12 05:56:10.295657] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:02.959 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.959 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:02.960 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.960 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.960 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:02.960 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:02.960 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:02.960 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.960 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:02.960 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.960 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.960 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.960 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.960 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.960 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:02.960 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.960 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.960 "name": "raid_bdev1", 00:18:02.960 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:02.960 "strip_size_kb": 0, 00:18:02.960 "state": "online", 00:18:02.960 "raid_level": "raid1", 00:18:02.960 "superblock": true, 00:18:02.960 "num_base_bdevs": 2, 00:18:02.960 "num_base_bdevs_discovered": 1, 00:18:02.960 "num_base_bdevs_operational": 1, 00:18:02.960 "base_bdevs_list": [ 00:18:02.960 { 00:18:02.960 "name": null, 00:18:02.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.960 "is_configured": false, 00:18:02.960 "data_offset": 0, 00:18:02.960 "data_size": 7936 00:18:02.960 }, 00:18:02.960 { 00:18:02.960 "name": "BaseBdev2", 00:18:02.960 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:02.960 "is_configured": true, 00:18:02.960 "data_offset": 256, 00:18:02.960 "data_size": 7936 00:18:02.960 } 00:18:02.960 ] 00:18:02.960 }' 00:18:02.960 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.960 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:03.530 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:03.530 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.530 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:03.530 [2024-12-12 05:56:10.762896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:03.530 [2024-12-12 05:56:10.776789] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:03.530 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.530 05:56:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:03.530 [2024-12-12 05:56:10.778546] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:04.470 05:56:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:04.470 05:56:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.470 05:56:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:04.470 05:56:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:04.470 05:56:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.470 05:56:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.470 05:56:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.470 05:56:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:18:04.470 05:56:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.470 05:56:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.470 05:56:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.470 "name": "raid_bdev1", 00:18:04.470 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:04.470 "strip_size_kb": 0, 00:18:04.470 "state": "online", 00:18:04.470 "raid_level": "raid1", 00:18:04.470 "superblock": true, 00:18:04.470 "num_base_bdevs": 2, 00:18:04.470 "num_base_bdevs_discovered": 2, 00:18:04.470 "num_base_bdevs_operational": 2, 00:18:04.470 "process": { 00:18:04.470 "type": "rebuild", 00:18:04.470 "target": "spare", 00:18:04.470 "progress": { 00:18:04.470 "blocks": 2560, 00:18:04.470 "percent": 32 00:18:04.470 } 00:18:04.470 }, 00:18:04.470 "base_bdevs_list": [ 00:18:04.470 { 00:18:04.470 "name": "spare", 00:18:04.470 "uuid": "2d5134e3-2264-5a72-89a5-f21fd1acdf2c", 00:18:04.470 "is_configured": true, 00:18:04.470 "data_offset": 256, 00:18:04.470 "data_size": 7936 00:18:04.470 }, 00:18:04.470 { 00:18:04.470 "name": "BaseBdev2", 00:18:04.470 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:04.470 "is_configured": true, 00:18:04.470 "data_offset": 256, 00:18:04.470 "data_size": 7936 00:18:04.470 } 00:18:04.470 ] 00:18:04.470 }' 00:18:04.470 05:56:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.470 05:56:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:04.470 05:56:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.470 05:56:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:04.470 05:56:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:04.470 05:56:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.470 05:56:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.470 [2024-12-12 05:56:11.926922] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:04.470 [2024-12-12 05:56:11.983350] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:04.470 [2024-12-12 05:56:11.983423] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.470 [2024-12-12 05:56:11.983438] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:04.470 [2024-12-12 05:56:11.983450] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:04.730 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.731 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:04.731 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.731 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.731 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:04.731 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:04.731 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:04.731 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.731 05:56:12 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.731 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.731 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.731 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.731 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.731 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.731 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.731 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.731 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.731 "name": "raid_bdev1", 00:18:04.731 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:04.731 "strip_size_kb": 0, 00:18:04.731 "state": "online", 00:18:04.731 "raid_level": "raid1", 00:18:04.731 "superblock": true, 00:18:04.731 "num_base_bdevs": 2, 00:18:04.731 "num_base_bdevs_discovered": 1, 00:18:04.731 "num_base_bdevs_operational": 1, 00:18:04.731 "base_bdevs_list": [ 00:18:04.731 { 00:18:04.731 "name": null, 00:18:04.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.731 "is_configured": false, 00:18:04.731 "data_offset": 0, 00:18:04.731 "data_size": 7936 00:18:04.731 }, 00:18:04.731 { 00:18:04.731 "name": "BaseBdev2", 00:18:04.731 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:04.731 "is_configured": true, 00:18:04.731 "data_offset": 256, 00:18:04.731 "data_size": 7936 00:18:04.731 } 00:18:04.731 ] 00:18:04.731 }' 00:18:04.731 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.731 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.991 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:04.991 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.991 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:04.991 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:04.991 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.991 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.991 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.991 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.991 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:04.991 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.991 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.991 "name": "raid_bdev1", 00:18:04.991 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:04.991 "strip_size_kb": 0, 00:18:04.991 "state": "online", 00:18:04.991 "raid_level": "raid1", 00:18:04.991 "superblock": true, 00:18:04.991 "num_base_bdevs": 2, 00:18:04.991 "num_base_bdevs_discovered": 1, 00:18:04.991 "num_base_bdevs_operational": 1, 00:18:04.991 "base_bdevs_list": [ 00:18:04.991 { 00:18:04.991 "name": null, 00:18:04.991 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.991 
"is_configured": false, 00:18:04.991 "data_offset": 0, 00:18:04.991 "data_size": 7936 00:18:04.991 }, 00:18:04.991 { 00:18:04.991 "name": "BaseBdev2", 00:18:04.991 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:04.991 "is_configured": true, 00:18:04.991 "data_offset": 256, 00:18:04.991 "data_size": 7936 00:18:04.991 } 00:18:04.991 ] 00:18:04.991 }' 00:18:04.991 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.251 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:05.251 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.251 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:05.251 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:05.251 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.251 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:05.251 [2024-12-12 05:56:12.605965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:05.251 [2024-12-12 05:56:12.619855] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:05.251 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.251 05:56:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:05.251 [2024-12-12 05:56:12.621630] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:06.191 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.191 05:56:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.191 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.191 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.191 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.191 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.191 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.191 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.191 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.191 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.191 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.191 "name": "raid_bdev1", 00:18:06.191 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:06.191 "strip_size_kb": 0, 00:18:06.191 "state": "online", 00:18:06.191 "raid_level": "raid1", 00:18:06.191 "superblock": true, 00:18:06.191 "num_base_bdevs": 2, 00:18:06.191 "num_base_bdevs_discovered": 2, 00:18:06.191 "num_base_bdevs_operational": 2, 00:18:06.191 "process": { 00:18:06.191 "type": "rebuild", 00:18:06.191 "target": "spare", 00:18:06.191 "progress": { 00:18:06.191 "blocks": 2560, 00:18:06.191 "percent": 32 00:18:06.191 } 00:18:06.191 }, 00:18:06.191 "base_bdevs_list": [ 00:18:06.191 { 00:18:06.191 "name": "spare", 00:18:06.191 "uuid": "2d5134e3-2264-5a72-89a5-f21fd1acdf2c", 00:18:06.191 "is_configured": true, 00:18:06.191 "data_offset": 256, 00:18:06.191 "data_size": 7936 00:18:06.191 }, 
00:18:06.191 { 00:18:06.191 "name": "BaseBdev2", 00:18:06.191 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:06.191 "is_configured": true, 00:18:06.191 "data_offset": 256, 00:18:06.191 "data_size": 7936 00:18:06.191 } 00:18:06.191 ] 00:18:06.191 }' 00:18:06.191 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:06.451 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=687 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:06.451 05:56:13 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:06.451 "name": "raid_bdev1", 00:18:06.451 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:06.451 "strip_size_kb": 0, 00:18:06.451 "state": "online", 00:18:06.451 "raid_level": "raid1", 00:18:06.451 "superblock": true, 00:18:06.451 "num_base_bdevs": 2, 00:18:06.451 "num_base_bdevs_discovered": 2, 00:18:06.451 "num_base_bdevs_operational": 2, 00:18:06.451 "process": { 00:18:06.451 "type": "rebuild", 00:18:06.451 "target": "spare", 00:18:06.451 "progress": { 00:18:06.451 "blocks": 2816, 00:18:06.451 "percent": 35 00:18:06.451 } 00:18:06.451 }, 00:18:06.451 "base_bdevs_list": [ 00:18:06.451 { 00:18:06.451 "name": "spare", 00:18:06.451 "uuid": "2d5134e3-2264-5a72-89a5-f21fd1acdf2c", 00:18:06.451 "is_configured": true, 00:18:06.451 "data_offset": 256, 00:18:06.451 "data_size": 7936 00:18:06.451 }, 00:18:06.451 { 00:18:06.451 "name": "BaseBdev2", 00:18:06.451 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:06.451 
"is_configured": true, 00:18:06.451 "data_offset": 256, 00:18:06.451 "data_size": 7936 00:18:06.451 } 00:18:06.451 ] 00:18:06.451 }' 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.451 05:56:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:07.832 05:56:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:07.833 05:56:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:07.833 05:56:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.833 05:56:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:07.833 05:56:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:07.833 05:56:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.833 05:56:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.833 05:56:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.833 05:56:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.833 05:56:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:07.833 05:56:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.833 05:56:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.833 "name": "raid_bdev1", 00:18:07.833 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:07.833 "strip_size_kb": 0, 00:18:07.833 "state": "online", 00:18:07.833 "raid_level": "raid1", 00:18:07.833 "superblock": true, 00:18:07.833 "num_base_bdevs": 2, 00:18:07.833 "num_base_bdevs_discovered": 2, 00:18:07.833 "num_base_bdevs_operational": 2, 00:18:07.833 "process": { 00:18:07.833 "type": "rebuild", 00:18:07.833 "target": "spare", 00:18:07.833 "progress": { 00:18:07.833 "blocks": 5888, 00:18:07.833 "percent": 74 00:18:07.833 } 00:18:07.833 }, 00:18:07.833 "base_bdevs_list": [ 00:18:07.833 { 00:18:07.833 "name": "spare", 00:18:07.833 "uuid": "2d5134e3-2264-5a72-89a5-f21fd1acdf2c", 00:18:07.833 "is_configured": true, 00:18:07.833 "data_offset": 256, 00:18:07.833 "data_size": 7936 00:18:07.833 }, 00:18:07.833 { 00:18:07.833 "name": "BaseBdev2", 00:18:07.833 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:07.833 "is_configured": true, 00:18:07.833 "data_offset": 256, 00:18:07.833 "data_size": 7936 00:18:07.833 } 00:18:07.833 ] 00:18:07.833 }' 00:18:07.833 05:56:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.833 05:56:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:07.833 05:56:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.833 05:56:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:07.833 05:56:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:08.402 [2024-12-12 05:56:15.733802] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:18:08.403 [2024-12-12 05:56:15.733870] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:08.403 [2024-12-12 05:56:15.733963] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:08.662 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:08.662 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:08.662 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.662 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:08.662 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:08.662 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.662 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.662 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.662 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.662 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.662 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.662 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.662 "name": "raid_bdev1", 00:18:08.662 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:08.662 "strip_size_kb": 0, 00:18:08.662 "state": "online", 00:18:08.663 "raid_level": "raid1", 00:18:08.663 "superblock": true, 00:18:08.663 
"num_base_bdevs": 2, 00:18:08.663 "num_base_bdevs_discovered": 2, 00:18:08.663 "num_base_bdevs_operational": 2, 00:18:08.663 "base_bdevs_list": [ 00:18:08.663 { 00:18:08.663 "name": "spare", 00:18:08.663 "uuid": "2d5134e3-2264-5a72-89a5-f21fd1acdf2c", 00:18:08.663 "is_configured": true, 00:18:08.663 "data_offset": 256, 00:18:08.663 "data_size": 7936 00:18:08.663 }, 00:18:08.663 { 00:18:08.663 "name": "BaseBdev2", 00:18:08.663 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:08.663 "is_configured": true, 00:18:08.663 "data_offset": 256, 00:18:08.663 "data_size": 7936 00:18:08.663 } 00:18:08.663 ] 00:18:08.663 }' 00:18:08.663 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.663 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.923 
05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:08.923 "name": "raid_bdev1", 00:18:08.923 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:08.923 "strip_size_kb": 0, 00:18:08.923 "state": "online", 00:18:08.923 "raid_level": "raid1", 00:18:08.923 "superblock": true, 00:18:08.923 "num_base_bdevs": 2, 00:18:08.923 "num_base_bdevs_discovered": 2, 00:18:08.923 "num_base_bdevs_operational": 2, 00:18:08.923 "base_bdevs_list": [ 00:18:08.923 { 00:18:08.923 "name": "spare", 00:18:08.923 "uuid": "2d5134e3-2264-5a72-89a5-f21fd1acdf2c", 00:18:08.923 "is_configured": true, 00:18:08.923 "data_offset": 256, 00:18:08.923 "data_size": 7936 00:18:08.923 }, 00:18:08.923 { 00:18:08.923 "name": "BaseBdev2", 00:18:08.923 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:08.923 "is_configured": true, 00:18:08.923 "data_offset": 256, 00:18:08.923 "data_size": 7936 00:18:08.923 } 00:18:08.923 ] 00:18:08.923 }' 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.923 "name": "raid_bdev1", 00:18:08.923 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:08.923 
"strip_size_kb": 0, 00:18:08.923 "state": "online", 00:18:08.923 "raid_level": "raid1", 00:18:08.923 "superblock": true, 00:18:08.923 "num_base_bdevs": 2, 00:18:08.923 "num_base_bdevs_discovered": 2, 00:18:08.923 "num_base_bdevs_operational": 2, 00:18:08.923 "base_bdevs_list": [ 00:18:08.923 { 00:18:08.923 "name": "spare", 00:18:08.923 "uuid": "2d5134e3-2264-5a72-89a5-f21fd1acdf2c", 00:18:08.923 "is_configured": true, 00:18:08.923 "data_offset": 256, 00:18:08.923 "data_size": 7936 00:18:08.923 }, 00:18:08.923 { 00:18:08.923 "name": "BaseBdev2", 00:18:08.923 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:08.923 "is_configured": true, 00:18:08.923 "data_offset": 256, 00:18:08.923 "data_size": 7936 00:18:08.923 } 00:18:08.923 ] 00:18:08.923 }' 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.923 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.493 [2024-12-12 05:56:16.779291] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:09.493 [2024-12-12 05:56:16.779321] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:09.493 [2024-12-12 05:56:16.779395] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:09.493 [2024-12-12 05:56:16.779457] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:09.493 [2024-12-12 05:56:16.779466] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:09.493 05:56:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:09.752 /dev/nbd0 00:18:09.752 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:09.752 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:09.752 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:09.752 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:09.752 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:09.752 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:09.752 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:09.752 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:09.752 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:09.752 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:09.752 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:09.752 1+0 records in 00:18:09.752 1+0 records out 00:18:09.752 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265911 s, 15.4 MB/s 00:18:09.752 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:09.752 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:09.752 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:09.752 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:09.752 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:09.752 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:09.752 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:09.752 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:10.012 /dev/nbd1 00:18:10.012 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:10.012 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:10.012 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:18:10.012 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:18:10.012 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:10.012 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:10.012 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:18:10.012 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:18:10.012 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:10.012 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:10.012 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:10.012 1+0 records in 00:18:10.012 1+0 records out 00:18:10.012 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040034 s, 10.2 MB/s 00:18:10.012 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:10.012 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:18:10.012 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:10.012 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:10.012 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:18:10.012 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:10.012 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:10.012 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:10.012 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:10.012 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:10.013 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:10.013 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:18:10.013 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:10.013 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:10.013 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:10.281 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:10.281 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:10.281 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:10.281 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:10.281 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:10.281 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:10.281 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:10.281 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:10.281 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:10.281 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:10.561 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:10.561 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:10.561 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:18:10.561 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:10.561 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:10.561 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:10.561 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:10.561 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:10.561 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:10.561 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:10.561 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.561 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.561 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.561 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:10.561 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.561 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.561 [2024-12-12 05:56:17.898699] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:10.561 [2024-12-12 05:56:17.898753] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.561 [2024-12-12 05:56:17.898777] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:10.561 [2024-12-12 05:56:17.898786] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:10.561 [2024-12-12 05:56:17.900936] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.561 [2024-12-12 05:56:17.900972] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:10.561 [2024-12-12 05:56:17.901031] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:10.561 [2024-12-12 05:56:17.901079] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:10.561 [2024-12-12 05:56:17.901216] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:10.561 spare 00:18:10.561 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.561 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:10.561 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.561 05:56:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.561 [2024-12-12 05:56:18.001101] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:10.561 [2024-12-12 05:56:18.001128] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:10.561 [2024-12-12 05:56:18.001218] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:10.561 [2024-12-12 05:56:18.001346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:10.561 [2024-12-12 05:56:18.001354] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:10.561 [2024-12-12 05:56:18.001464] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.561 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:10.561 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:10.561 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:10.561 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.561 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.561 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.561 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:10.561 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.561 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.561 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.561 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.561 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.561 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:10.561 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.561 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:10.561 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.561 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.561 "name": "raid_bdev1", 00:18:10.561 "uuid": 
"8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:10.561 "strip_size_kb": 0, 00:18:10.561 "state": "online", 00:18:10.561 "raid_level": "raid1", 00:18:10.561 "superblock": true, 00:18:10.561 "num_base_bdevs": 2, 00:18:10.561 "num_base_bdevs_discovered": 2, 00:18:10.561 "num_base_bdevs_operational": 2, 00:18:10.561 "base_bdevs_list": [ 00:18:10.561 { 00:18:10.561 "name": "spare", 00:18:10.561 "uuid": "2d5134e3-2264-5a72-89a5-f21fd1acdf2c", 00:18:10.561 "is_configured": true, 00:18:10.561 "data_offset": 256, 00:18:10.561 "data_size": 7936 00:18:10.561 }, 00:18:10.561 { 00:18:10.561 "name": "BaseBdev2", 00:18:10.561 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:10.561 "is_configured": true, 00:18:10.561 "data_offset": 256, 00:18:10.561 "data_size": 7936 00:18:10.561 } 00:18:10.561 ] 00:18:10.561 }' 00:18:10.561 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.561 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:11.168 "name": "raid_bdev1", 00:18:11.168 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:11.168 "strip_size_kb": 0, 00:18:11.168 "state": "online", 00:18:11.168 "raid_level": "raid1", 00:18:11.168 "superblock": true, 00:18:11.168 "num_base_bdevs": 2, 00:18:11.168 "num_base_bdevs_discovered": 2, 00:18:11.168 "num_base_bdevs_operational": 2, 00:18:11.168 "base_bdevs_list": [ 00:18:11.168 { 00:18:11.168 "name": "spare", 00:18:11.168 "uuid": "2d5134e3-2264-5a72-89a5-f21fd1acdf2c", 00:18:11.168 "is_configured": true, 00:18:11.168 "data_offset": 256, 00:18:11.168 "data_size": 7936 00:18:11.168 }, 00:18:11.168 { 00:18:11.168 "name": "BaseBdev2", 00:18:11.168 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:11.168 "is_configured": true, 00:18:11.168 "data_offset": 256, 00:18:11.168 "data_size": 7936 00:18:11.168 } 00:18:11.168 ] 00:18:11.168 }' 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r 
'.[].base_bdevs_list[0].name' 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.168 [2024-12-12 05:56:18.669412] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:11.168 05:56:18 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.168 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.429 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.429 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:11.429 "name": "raid_bdev1", 00:18:11.429 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:11.429 "strip_size_kb": 0, 00:18:11.429 "state": "online", 00:18:11.429 "raid_level": "raid1", 00:18:11.429 "superblock": true, 00:18:11.429 "num_base_bdevs": 2, 00:18:11.429 "num_base_bdevs_discovered": 1, 00:18:11.429 "num_base_bdevs_operational": 1, 00:18:11.429 "base_bdevs_list": [ 00:18:11.429 { 00:18:11.429 "name": null, 00:18:11.429 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:11.429 "is_configured": false, 00:18:11.429 "data_offset": 0, 00:18:11.429 "data_size": 7936 00:18:11.429 }, 00:18:11.429 { 00:18:11.429 "name": "BaseBdev2", 00:18:11.429 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:11.429 "is_configured": true, 00:18:11.429 "data_offset": 256, 00:18:11.429 "data_size": 7936 00:18:11.429 } 00:18:11.429 ] 00:18:11.429 }' 00:18:11.429 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:11.429 05:56:18 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.689 05:56:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:11.689 05:56:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.689 05:56:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:11.689 [2024-12-12 05:56:19.076712] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:11.689 [2024-12-12 05:56:19.076918] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:11.689 [2024-12-12 05:56:19.076983] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:11.689 [2024-12-12 05:56:19.077047] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:11.689 [2024-12-12 05:56:19.090891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:11.689 05:56:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.689 05:56:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:11.689 [2024-12-12 05:56:19.092705] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:12.630 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:12.630 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:12.630 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:12.630 05:56:20 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:12.630 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:12.630 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.630 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.630 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.630 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.630 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.630 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:12.630 "name": "raid_bdev1", 00:18:12.630 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:12.630 "strip_size_kb": 0, 00:18:12.630 "state": "online", 00:18:12.630 "raid_level": "raid1", 00:18:12.630 "superblock": true, 00:18:12.630 "num_base_bdevs": 2, 00:18:12.630 "num_base_bdevs_discovered": 2, 00:18:12.630 "num_base_bdevs_operational": 2, 00:18:12.630 "process": { 00:18:12.630 "type": "rebuild", 00:18:12.630 "target": "spare", 00:18:12.630 "progress": { 00:18:12.630 "blocks": 2560, 00:18:12.630 "percent": 32 00:18:12.630 } 00:18:12.630 }, 00:18:12.630 "base_bdevs_list": [ 00:18:12.630 { 00:18:12.630 "name": "spare", 00:18:12.630 "uuid": "2d5134e3-2264-5a72-89a5-f21fd1acdf2c", 00:18:12.630 "is_configured": true, 00:18:12.630 "data_offset": 256, 00:18:12.630 "data_size": 7936 00:18:12.630 }, 00:18:12.630 { 00:18:12.630 "name": "BaseBdev2", 00:18:12.630 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:12.630 "is_configured": true, 00:18:12.630 "data_offset": 256, 00:18:12.630 "data_size": 7936 00:18:12.630 } 00:18:12.630 ] 00:18:12.630 
}' 00:18:12.630 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.890 [2024-12-12 05:56:20.256641] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:12.890 [2024-12-12 05:56:20.297537] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:12.890 [2024-12-12 05:56:20.297597] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.890 [2024-12-12 05:56:20.297610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:12.890 [2024-12-12 05:56:20.297628] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.890 "name": "raid_bdev1", 00:18:12.890 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:12.890 "strip_size_kb": 0, 00:18:12.890 "state": "online", 00:18:12.890 "raid_level": "raid1", 00:18:12.890 "superblock": true, 00:18:12.890 "num_base_bdevs": 2, 00:18:12.890 "num_base_bdevs_discovered": 1, 00:18:12.890 "num_base_bdevs_operational": 1, 00:18:12.890 "base_bdevs_list": [ 00:18:12.890 { 00:18:12.890 "name": 
null, 00:18:12.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.890 "is_configured": false, 00:18:12.890 "data_offset": 0, 00:18:12.890 "data_size": 7936 00:18:12.890 }, 00:18:12.890 { 00:18:12.890 "name": "BaseBdev2", 00:18:12.890 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:12.890 "is_configured": true, 00:18:12.890 "data_offset": 256, 00:18:12.890 "data_size": 7936 00:18:12.890 } 00:18:12.890 ] 00:18:12.890 }' 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.890 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.460 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:13.460 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.460 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:13.460 [2024-12-12 05:56:20.788394] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:13.460 [2024-12-12 05:56:20.788504] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.460 [2024-12-12 05:56:20.788554] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:13.460 [2024-12-12 05:56:20.788584] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.460 [2024-12-12 05:56:20.788869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.460 [2024-12-12 05:56:20.788934] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:13.460 [2024-12-12 05:56:20.789029] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:13.460 [2024-12-12 05:56:20.789069] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock 
seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:13.460 [2024-12-12 05:56:20.789114] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:13.460 [2024-12-12 05:56:20.789171] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:13.460 [2024-12-12 05:56:20.802638] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:13.460 spare 00:18:13.460 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.460 05:56:20 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:13.460 [2024-12-12 05:56:20.804458] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:14.400 05:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:14.400 05:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:14.400 05:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:14.400 05:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:14.400 05:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:14.400 05:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.400 05:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.400 05:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.400 05:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.400 05:56:21 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.400 05:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:14.400 "name": "raid_bdev1", 00:18:14.400 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:14.400 "strip_size_kb": 0, 00:18:14.400 "state": "online", 00:18:14.400 "raid_level": "raid1", 00:18:14.400 "superblock": true, 00:18:14.400 "num_base_bdevs": 2, 00:18:14.400 "num_base_bdevs_discovered": 2, 00:18:14.400 "num_base_bdevs_operational": 2, 00:18:14.400 "process": { 00:18:14.400 "type": "rebuild", 00:18:14.400 "target": "spare", 00:18:14.400 "progress": { 00:18:14.400 "blocks": 2560, 00:18:14.400 "percent": 32 00:18:14.400 } 00:18:14.400 }, 00:18:14.400 "base_bdevs_list": [ 00:18:14.400 { 00:18:14.400 "name": "spare", 00:18:14.400 "uuid": "2d5134e3-2264-5a72-89a5-f21fd1acdf2c", 00:18:14.400 "is_configured": true, 00:18:14.400 "data_offset": 256, 00:18:14.400 "data_size": 7936 00:18:14.400 }, 00:18:14.400 { 00:18:14.400 "name": "BaseBdev2", 00:18:14.400 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:14.400 "is_configured": true, 00:18:14.400 "data_offset": 256, 00:18:14.400 "data_size": 7936 00:18:14.400 } 00:18:14.400 ] 00:18:14.400 }' 00:18:14.400 05:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:14.400 05:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:14.400 05:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:14.660 05:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:14.660 05:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:14.660 05:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.660 05:56:21 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.660 [2024-12-12 05:56:21.952166] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:14.660 [2024-12-12 05:56:22.009068] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:14.660 [2024-12-12 05:56:22.009120] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.660 [2024-12-12 05:56:22.009136] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:14.660 [2024-12-12 05:56:22.009142] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:14.660 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.661 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:14.661 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.661 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.661 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.661 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.661 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:14.661 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.661 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.661 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:14.661 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.661 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.661 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.661 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.661 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:14.661 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.661 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.661 "name": "raid_bdev1", 00:18:14.661 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:14.661 "strip_size_kb": 0, 00:18:14.661 "state": "online", 00:18:14.661 "raid_level": "raid1", 00:18:14.661 "superblock": true, 00:18:14.661 "num_base_bdevs": 2, 00:18:14.661 "num_base_bdevs_discovered": 1, 00:18:14.661 "num_base_bdevs_operational": 1, 00:18:14.661 "base_bdevs_list": [ 00:18:14.661 { 00:18:14.661 "name": null, 00:18:14.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.661 "is_configured": false, 00:18:14.661 "data_offset": 0, 00:18:14.661 "data_size": 7936 00:18:14.661 }, 00:18:14.661 { 00:18:14.661 "name": "BaseBdev2", 00:18:14.661 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:14.661 "is_configured": true, 00:18:14.661 "data_offset": 256, 00:18:14.661 "data_size": 7936 00:18:14.661 } 00:18:14.661 ] 00:18:14.661 }' 00:18:14.661 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.661 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:15.231 "name": "raid_bdev1", 00:18:15.231 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:15.231 "strip_size_kb": 0, 00:18:15.231 "state": "online", 00:18:15.231 "raid_level": "raid1", 00:18:15.231 "superblock": true, 00:18:15.231 "num_base_bdevs": 2, 00:18:15.231 "num_base_bdevs_discovered": 1, 00:18:15.231 "num_base_bdevs_operational": 1, 00:18:15.231 "base_bdevs_list": [ 00:18:15.231 { 00:18:15.231 "name": null, 00:18:15.231 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.231 "is_configured": false, 00:18:15.231 "data_offset": 0, 00:18:15.231 "data_size": 7936 00:18:15.231 }, 00:18:15.231 { 00:18:15.231 "name": "BaseBdev2", 00:18:15.231 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 
00:18:15.231 "is_configured": true, 00:18:15.231 "data_offset": 256, 00:18:15.231 "data_size": 7936 00:18:15.231 } 00:18:15.231 ] 00:18:15.231 }' 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:15.231 [2024-12-12 05:56:22.591279] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:15.231 [2024-12-12 05:56:22.591329] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.231 [2024-12-12 05:56:22.591350] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:15.231 [2024-12-12 05:56:22.591359] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:18:15.231 [2024-12-12 05:56:22.591567] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.231 [2024-12-12 05:56:22.591582] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:15.231 [2024-12-12 05:56:22.591627] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:15.231 [2024-12-12 05:56:22.591639] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:15.231 [2024-12-12 05:56:22.591651] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:15.231 [2024-12-12 05:56:22.591661] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:15.231 BaseBdev1 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.231 05:56:22 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:16.171 05:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.171 05:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.171 05:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.171 05:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.171 05:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.171 05:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.171 05:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.171 05:56:23 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.171 05:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.171 05:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.171 05:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.171 05:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.171 05:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.171 05:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.171 05:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.171 05:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.171 "name": "raid_bdev1", 00:18:16.171 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:16.171 "strip_size_kb": 0, 00:18:16.171 "state": "online", 00:18:16.171 "raid_level": "raid1", 00:18:16.171 "superblock": true, 00:18:16.171 "num_base_bdevs": 2, 00:18:16.171 "num_base_bdevs_discovered": 1, 00:18:16.171 "num_base_bdevs_operational": 1, 00:18:16.171 "base_bdevs_list": [ 00:18:16.171 { 00:18:16.171 "name": null, 00:18:16.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.171 "is_configured": false, 00:18:16.171 "data_offset": 0, 00:18:16.171 "data_size": 7936 00:18:16.171 }, 00:18:16.171 { 00:18:16.171 "name": "BaseBdev2", 00:18:16.171 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:16.171 "is_configured": true, 00:18:16.171 "data_offset": 256, 00:18:16.171 "data_size": 7936 00:18:16.171 } 00:18:16.171 ] 00:18:16.171 }' 00:18:16.171 05:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.171 05:56:23 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:16.742 "name": "raid_bdev1", 00:18:16.742 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:16.742 "strip_size_kb": 0, 00:18:16.742 "state": "online", 00:18:16.742 "raid_level": "raid1", 00:18:16.742 "superblock": true, 00:18:16.742 "num_base_bdevs": 2, 00:18:16.742 "num_base_bdevs_discovered": 1, 00:18:16.742 "num_base_bdevs_operational": 1, 00:18:16.742 "base_bdevs_list": [ 00:18:16.742 { 00:18:16.742 "name": null, 00:18:16.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.742 
"is_configured": false, 00:18:16.742 "data_offset": 0, 00:18:16.742 "data_size": 7936 00:18:16.742 }, 00:18:16.742 { 00:18:16.742 "name": "BaseBdev2", 00:18:16.742 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:16.742 "is_configured": true, 00:18:16.742 "data_offset": 256, 00:18:16.742 "data_size": 7936 00:18:16.742 } 00:18:16.742 ] 00:18:16.742 }' 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:16.742 05:56:24 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:16.742 [2024-12-12 05:56:24.184611] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:16.742 [2024-12-12 05:56:24.184727] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:16.742 [2024-12-12 05:56:24.184742] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:16.742 request: 00:18:16.742 { 00:18:16.742 "base_bdev": "BaseBdev1", 00:18:16.742 "raid_bdev": "raid_bdev1", 00:18:16.742 "method": "bdev_raid_add_base_bdev", 00:18:16.742 "req_id": 1 00:18:16.742 } 00:18:16.742 Got JSON-RPC error response 00:18:16.742 response: 00:18:16.742 { 00:18:16.742 "code": -22, 00:18:16.742 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:16.742 } 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:16.742 05:56:24 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:17.682 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:17.682 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:18:17.682 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.682 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.682 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.682 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:17.682 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.682 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.942 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.942 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.942 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.942 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.942 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.942 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:17.942 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.942 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.942 "name": "raid_bdev1", 00:18:17.942 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:17.942 "strip_size_kb": 0, 00:18:17.942 "state": "online", 00:18:17.942 "raid_level": "raid1", 00:18:17.942 "superblock": true, 00:18:17.942 "num_base_bdevs": 2, 00:18:17.942 
"num_base_bdevs_discovered": 1, 00:18:17.942 "num_base_bdevs_operational": 1, 00:18:17.942 "base_bdevs_list": [ 00:18:17.942 { 00:18:17.942 "name": null, 00:18:17.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.942 "is_configured": false, 00:18:17.942 "data_offset": 0, 00:18:17.942 "data_size": 7936 00:18:17.942 }, 00:18:17.942 { 00:18:17.942 "name": "BaseBdev2", 00:18:17.942 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:17.942 "is_configured": true, 00:18:17.942 "data_offset": 256, 00:18:17.942 "data_size": 7936 00:18:17.942 } 00:18:17.942 ] 00:18:17.942 }' 00:18:17.942 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.942 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.202 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:18.202 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:18.202 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:18.202 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:18.202 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:18.202 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.202 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.202 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.202 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:18.202 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.202 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:18.202 "name": "raid_bdev1", 00:18:18.202 "uuid": "8f7612bc-bf8c-4cc6-9afd-3561dc19fc3e", 00:18:18.202 "strip_size_kb": 0, 00:18:18.202 "state": "online", 00:18:18.202 "raid_level": "raid1", 00:18:18.202 "superblock": true, 00:18:18.202 "num_base_bdevs": 2, 00:18:18.202 "num_base_bdevs_discovered": 1, 00:18:18.202 "num_base_bdevs_operational": 1, 00:18:18.202 "base_bdevs_list": [ 00:18:18.202 { 00:18:18.202 "name": null, 00:18:18.202 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.202 "is_configured": false, 00:18:18.202 "data_offset": 0, 00:18:18.202 "data_size": 7936 00:18:18.202 }, 00:18:18.202 { 00:18:18.202 "name": "BaseBdev2", 00:18:18.202 "uuid": "d0dd9446-4d5e-5edc-b57b-2559235f5183", 00:18:18.202 "is_configured": true, 00:18:18.202 "data_offset": 256, 00:18:18.202 "data_size": 7936 00:18:18.202 } 00:18:18.202 ] 00:18:18.202 }' 00:18:18.202 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:18.462 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:18.462 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:18.462 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:18.462 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 87039 00:18:18.462 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87039 ']' 00:18:18.462 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 87039 00:18:18.462 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:18:18.462 05:56:25 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.462 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87039 00:18:18.462 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:18.462 killing process with pid 87039 00:18:18.462 Received shutdown signal, test time was about 60.000000 seconds 00:18:18.462 00:18:18.462 Latency(us) 00:18:18.462 [2024-12-12T05:56:25.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.462 [2024-12-12T05:56:25.984Z] =================================================================================================================== 00:18:18.462 [2024-12-12T05:56:25.984Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:18.462 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:18.462 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87039' 00:18:18.462 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87039 00:18:18.462 [2024-12-12 05:56:25.856588] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:18.462 [2024-12-12 05:56:25.856679] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:18.463 [2024-12-12 05:56:25.856720] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:18.463 [2024-12-12 05:56:25.856730] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:18.463 05:56:25 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87039 00:18:18.723 [2024-12-12 05:56:26.153391] bdev_raid.c:1413:raid_bdev_exit: 
*DEBUG*: raid_bdev_exit 00:18:20.107 05:56:27 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:18:20.107 00:18:20.107 real 0m19.709s 00:18:20.107 user 0m25.806s 00:18:20.107 sys 0m2.648s 00:18:20.107 05:56:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:20.107 ************************************ 00:18:20.107 END TEST raid_rebuild_test_sb_md_separate 00:18:20.107 ************************************ 00:18:20.107 05:56:27 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:20.107 05:56:27 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:18:20.107 05:56:27 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:18:20.107 05:56:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:20.107 05:56:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:20.107 05:56:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:20.107 ************************************ 00:18:20.107 START TEST raid_state_function_test_sb_md_interleaved 00:18:20.107 ************************************ 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:20.107 05:56:27 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=87611 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87611' 00:18:20.107 Process raid pid: 87611 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 87611 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 87611 ']' 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:20.107 05:56:27 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.107 [2024-12-12 05:56:27.380164] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:18:20.107 [2024-12-12 05:56:27.380386] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:20.107 [2024-12-12 05:56:27.560004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.368 [2024-12-12 05:56:27.663774] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.368 [2024-12-12 05:56:27.839685] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:20.368 [2024-12-12 05:56:27.839718] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.938 [2024-12-12 05:56:28.191092] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:20.938 [2024-12-12 05:56:28.191144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:20.938 [2024-12-12 05:56:28.191155] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:20.938 [2024-12-12 05:56:28.191164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:20.938 05:56:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:20.938 05:56:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.938 "name": "Existed_Raid", 00:18:20.938 "uuid": "85b4de11-63d6-4150-98f1-045122243d0e", 00:18:20.938 "strip_size_kb": 0, 00:18:20.938 "state": "configuring", 00:18:20.938 "raid_level": "raid1", 00:18:20.938 "superblock": true, 00:18:20.938 "num_base_bdevs": 2, 00:18:20.938 "num_base_bdevs_discovered": 0, 00:18:20.938 "num_base_bdevs_operational": 2, 00:18:20.938 "base_bdevs_list": [ 00:18:20.938 { 00:18:20.938 "name": "BaseBdev1", 00:18:20.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.938 "is_configured": false, 00:18:20.938 "data_offset": 0, 00:18:20.938 "data_size": 0 00:18:20.938 }, 00:18:20.938 { 00:18:20.938 "name": "BaseBdev2", 00:18:20.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:20.938 "is_configured": false, 00:18:20.938 "data_offset": 0, 00:18:20.938 "data_size": 0 00:18:20.938 } 00:18:20.938 ] 00:18:20.938 }' 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.938 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.198 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:21.198 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.198 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.198 [2024-12-12 05:56:28.670257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:21.198 [2024-12-12 05:56:28.670330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:18:21.198 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.198 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:21.198 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.198 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.198 [2024-12-12 05:56:28.678251] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:21.198 [2024-12-12 05:56:28.678327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:21.198 [2024-12-12 05:56:28.678387] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:21.198 [2024-12-12 05:56:28.678426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:21.198 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.198 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:18:21.198 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.198 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.459 [2024-12-12 05:56:28.722993] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:21.459 BaseBdev1 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.459 [ 00:18:21.459 { 00:18:21.459 "name": "BaseBdev1", 00:18:21.459 "aliases": [ 00:18:21.459 "78d4e42d-5365-4e3b-9034-2afd00179728" 00:18:21.459 ], 00:18:21.459 "product_name": "Malloc disk", 00:18:21.459 "block_size": 4128, 00:18:21.459 "num_blocks": 8192, 00:18:21.459 "uuid": "78d4e42d-5365-4e3b-9034-2afd00179728", 00:18:21.459 "md_size": 32, 00:18:21.459 
"md_interleave": true, 00:18:21.459 "dif_type": 0, 00:18:21.459 "assigned_rate_limits": { 00:18:21.459 "rw_ios_per_sec": 0, 00:18:21.459 "rw_mbytes_per_sec": 0, 00:18:21.459 "r_mbytes_per_sec": 0, 00:18:21.459 "w_mbytes_per_sec": 0 00:18:21.459 }, 00:18:21.459 "claimed": true, 00:18:21.459 "claim_type": "exclusive_write", 00:18:21.459 "zoned": false, 00:18:21.459 "supported_io_types": { 00:18:21.459 "read": true, 00:18:21.459 "write": true, 00:18:21.459 "unmap": true, 00:18:21.459 "flush": true, 00:18:21.459 "reset": true, 00:18:21.459 "nvme_admin": false, 00:18:21.459 "nvme_io": false, 00:18:21.459 "nvme_io_md": false, 00:18:21.459 "write_zeroes": true, 00:18:21.459 "zcopy": true, 00:18:21.459 "get_zone_info": false, 00:18:21.459 "zone_management": false, 00:18:21.459 "zone_append": false, 00:18:21.459 "compare": false, 00:18:21.459 "compare_and_write": false, 00:18:21.459 "abort": true, 00:18:21.459 "seek_hole": false, 00:18:21.459 "seek_data": false, 00:18:21.459 "copy": true, 00:18:21.459 "nvme_iov_md": false 00:18:21.459 }, 00:18:21.459 "memory_domains": [ 00:18:21.459 { 00:18:21.459 "dma_device_id": "system", 00:18:21.459 "dma_device_type": 1 00:18:21.459 }, 00:18:21.459 { 00:18:21.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:21.459 "dma_device_type": 2 00:18:21.459 } 00:18:21.459 ], 00:18:21.459 "driver_specific": {} 00:18:21.459 } 00:18:21.459 ] 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:21.459 05:56:28 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:21.459 "name": "Existed_Raid", 00:18:21.459 "uuid": "3dcb9247-af8c-4a57-8cfa-7a26a21040cd", 00:18:21.459 "strip_size_kb": 0, 00:18:21.459 "state": "configuring", 00:18:21.459 "raid_level": "raid1", 
00:18:21.459 "superblock": true, 00:18:21.459 "num_base_bdevs": 2, 00:18:21.459 "num_base_bdevs_discovered": 1, 00:18:21.459 "num_base_bdevs_operational": 2, 00:18:21.459 "base_bdevs_list": [ 00:18:21.459 { 00:18:21.459 "name": "BaseBdev1", 00:18:21.459 "uuid": "78d4e42d-5365-4e3b-9034-2afd00179728", 00:18:21.459 "is_configured": true, 00:18:21.459 "data_offset": 256, 00:18:21.459 "data_size": 7936 00:18:21.459 }, 00:18:21.459 { 00:18:21.459 "name": "BaseBdev2", 00:18:21.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:21.459 "is_configured": false, 00:18:21.459 "data_offset": 0, 00:18:21.459 "data_size": 0 00:18:21.459 } 00:18:21.459 ] 00:18:21.459 }' 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:21.459 05:56:28 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.030 [2024-12-12 05:56:29.254321] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:22.030 [2024-12-12 05:56:29.254417] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.030 [2024-12-12 05:56:29.266356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:22.030 [2024-12-12 05:56:29.268176] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:22.030 [2024-12-12 05:56:29.268217] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.030 
05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.030 "name": "Existed_Raid", 00:18:22.030 "uuid": "3f2244c0-62e3-4baa-bbbc-887ce7f2da34", 00:18:22.030 "strip_size_kb": 0, 00:18:22.030 "state": "configuring", 00:18:22.030 "raid_level": "raid1", 00:18:22.030 "superblock": true, 00:18:22.030 "num_base_bdevs": 2, 00:18:22.030 "num_base_bdevs_discovered": 1, 00:18:22.030 "num_base_bdevs_operational": 2, 00:18:22.030 "base_bdevs_list": [ 00:18:22.030 { 00:18:22.030 "name": "BaseBdev1", 00:18:22.030 "uuid": "78d4e42d-5365-4e3b-9034-2afd00179728", 00:18:22.030 "is_configured": true, 00:18:22.030 "data_offset": 256, 00:18:22.030 "data_size": 7936 00:18:22.030 }, 00:18:22.030 { 00:18:22.030 "name": "BaseBdev2", 00:18:22.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.030 "is_configured": false, 00:18:22.030 "data_offset": 0, 00:18:22.030 "data_size": 0 00:18:22.030 } 00:18:22.030 ] 00:18:22.030 }' 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:18:22.030 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.291 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:18:22.291 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.291 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.291 [2024-12-12 05:56:29.780946] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:22.291 [2024-12-12 05:56:29.781240] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:22.291 [2024-12-12 05:56:29.781290] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:22.291 [2024-12-12 05:56:29.781407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:22.291 [2024-12-12 05:56:29.781536] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:22.291 [2024-12-12 05:56:29.781586] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:22.291 [2024-12-12 05:56:29.781705] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.291 BaseBdev2 00:18:22.291 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.291 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:22.291 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:22.291 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:18:22.291 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:18:22.291 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:22.291 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:22.291 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:22.291 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.291 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.291 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.291 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:22.291 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.291 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.291 [ 00:18:22.291 { 00:18:22.291 "name": "BaseBdev2", 00:18:22.291 "aliases": [ 00:18:22.551 "d3891718-f3f6-45b4-b6b8-111443dd8689" 00:18:22.551 ], 00:18:22.551 "product_name": "Malloc disk", 00:18:22.551 "block_size": 4128, 00:18:22.551 "num_blocks": 8192, 00:18:22.551 "uuid": "d3891718-f3f6-45b4-b6b8-111443dd8689", 00:18:22.551 "md_size": 32, 00:18:22.551 "md_interleave": true, 00:18:22.551 "dif_type": 0, 00:18:22.551 "assigned_rate_limits": { 00:18:22.551 "rw_ios_per_sec": 0, 00:18:22.551 "rw_mbytes_per_sec": 0, 00:18:22.551 "r_mbytes_per_sec": 0, 00:18:22.551 "w_mbytes_per_sec": 0 00:18:22.551 }, 00:18:22.551 "claimed": true, 00:18:22.551 "claim_type": "exclusive_write", 
00:18:22.551 "zoned": false, 00:18:22.551 "supported_io_types": { 00:18:22.551 "read": true, 00:18:22.551 "write": true, 00:18:22.551 "unmap": true, 00:18:22.551 "flush": true, 00:18:22.551 "reset": true, 00:18:22.551 "nvme_admin": false, 00:18:22.551 "nvme_io": false, 00:18:22.551 "nvme_io_md": false, 00:18:22.551 "write_zeroes": true, 00:18:22.551 "zcopy": true, 00:18:22.551 "get_zone_info": false, 00:18:22.551 "zone_management": false, 00:18:22.551 "zone_append": false, 00:18:22.551 "compare": false, 00:18:22.551 "compare_and_write": false, 00:18:22.551 "abort": true, 00:18:22.551 "seek_hole": false, 00:18:22.551 "seek_data": false, 00:18:22.551 "copy": true, 00:18:22.551 "nvme_iov_md": false 00:18:22.551 }, 00:18:22.551 "memory_domains": [ 00:18:22.551 { 00:18:22.551 "dma_device_id": "system", 00:18:22.551 "dma_device_type": 1 00:18:22.551 }, 00:18:22.551 { 00:18:22.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.551 "dma_device_type": 2 00:18:22.551 } 00:18:22.551 ], 00:18:22.551 "driver_specific": {} 00:18:22.551 } 00:18:22.551 ] 00:18:22.551 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.551 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:18:22.551 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:22.551 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:22.551 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:22.551 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.551 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.551 
05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.551 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.551 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:22.551 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.551 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.551 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.551 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.551 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.551 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:22.551 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.551 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.551 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.551 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.551 "name": "Existed_Raid", 00:18:22.551 "uuid": "3f2244c0-62e3-4baa-bbbc-887ce7f2da34", 00:18:22.551 "strip_size_kb": 0, 00:18:22.551 "state": "online", 00:18:22.551 "raid_level": "raid1", 00:18:22.551 "superblock": true, 00:18:22.551 "num_base_bdevs": 2, 00:18:22.551 "num_base_bdevs_discovered": 2, 00:18:22.551 
"num_base_bdevs_operational": 2, 00:18:22.551 "base_bdevs_list": [ 00:18:22.551 { 00:18:22.551 "name": "BaseBdev1", 00:18:22.551 "uuid": "78d4e42d-5365-4e3b-9034-2afd00179728", 00:18:22.551 "is_configured": true, 00:18:22.551 "data_offset": 256, 00:18:22.551 "data_size": 7936 00:18:22.551 }, 00:18:22.551 { 00:18:22.551 "name": "BaseBdev2", 00:18:22.551 "uuid": "d3891718-f3f6-45b4-b6b8-111443dd8689", 00:18:22.551 "is_configured": true, 00:18:22.551 "data_offset": 256, 00:18:22.551 "data_size": 7936 00:18:22.551 } 00:18:22.551 ] 00:18:22.551 }' 00:18:22.551 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.551 05:56:29 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.812 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:22.812 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:22.812 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:22.812 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:22.812 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:22.812 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:22.812 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:22.812 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:22.812 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.812 05:56:30 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:22.812 [2024-12-12 05:56:30.276401] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:22.812 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.812 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:22.812 "name": "Existed_Raid", 00:18:22.812 "aliases": [ 00:18:22.812 "3f2244c0-62e3-4baa-bbbc-887ce7f2da34" 00:18:22.812 ], 00:18:22.812 "product_name": "Raid Volume", 00:18:22.812 "block_size": 4128, 00:18:22.812 "num_blocks": 7936, 00:18:22.812 "uuid": "3f2244c0-62e3-4baa-bbbc-887ce7f2da34", 00:18:22.812 "md_size": 32, 00:18:22.812 "md_interleave": true, 00:18:22.812 "dif_type": 0, 00:18:22.812 "assigned_rate_limits": { 00:18:22.812 "rw_ios_per_sec": 0, 00:18:22.812 "rw_mbytes_per_sec": 0, 00:18:22.812 "r_mbytes_per_sec": 0, 00:18:22.812 "w_mbytes_per_sec": 0 00:18:22.812 }, 00:18:22.812 "claimed": false, 00:18:22.812 "zoned": false, 00:18:22.812 "supported_io_types": { 00:18:22.812 "read": true, 00:18:22.812 "write": true, 00:18:22.812 "unmap": false, 00:18:22.812 "flush": false, 00:18:22.812 "reset": true, 00:18:22.812 "nvme_admin": false, 00:18:22.812 "nvme_io": false, 00:18:22.812 "nvme_io_md": false, 00:18:22.812 "write_zeroes": true, 00:18:22.812 "zcopy": false, 00:18:22.812 "get_zone_info": false, 00:18:22.812 "zone_management": false, 00:18:22.812 "zone_append": false, 00:18:22.812 "compare": false, 00:18:22.812 "compare_and_write": false, 00:18:22.812 "abort": false, 00:18:22.812 "seek_hole": false, 00:18:22.812 "seek_data": false, 00:18:22.812 "copy": false, 00:18:22.812 "nvme_iov_md": false 00:18:22.812 }, 00:18:22.812 "memory_domains": [ 00:18:22.812 { 00:18:22.812 "dma_device_id": "system", 00:18:22.812 "dma_device_type": 1 00:18:22.812 }, 00:18:22.812 { 00:18:22.812 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:22.812 "dma_device_type": 2 00:18:22.812 }, 00:18:22.812 { 00:18:22.812 "dma_device_id": "system", 00:18:22.812 "dma_device_type": 1 00:18:22.812 }, 00:18:22.812 { 00:18:22.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:22.812 "dma_device_type": 2 00:18:22.812 } 00:18:22.812 ], 00:18:22.812 "driver_specific": { 00:18:22.812 "raid": { 00:18:22.812 "uuid": "3f2244c0-62e3-4baa-bbbc-887ce7f2da34", 00:18:22.812 "strip_size_kb": 0, 00:18:22.812 "state": "online", 00:18:22.812 "raid_level": "raid1", 00:18:22.812 "superblock": true, 00:18:22.812 "num_base_bdevs": 2, 00:18:22.812 "num_base_bdevs_discovered": 2, 00:18:22.812 "num_base_bdevs_operational": 2, 00:18:22.812 "base_bdevs_list": [ 00:18:22.812 { 00:18:22.812 "name": "BaseBdev1", 00:18:22.812 "uuid": "78d4e42d-5365-4e3b-9034-2afd00179728", 00:18:22.812 "is_configured": true, 00:18:22.812 "data_offset": 256, 00:18:22.812 "data_size": 7936 00:18:22.812 }, 00:18:22.812 { 00:18:22.812 "name": "BaseBdev2", 00:18:22.812 "uuid": "d3891718-f3f6-45b4-b6b8-111443dd8689", 00:18:22.812 "is_configured": true, 00:18:22.812 "data_offset": 256, 00:18:22.812 "data_size": 7936 00:18:22.812 } 00:18:22.812 ] 00:18:22.812 } 00:18:22.812 } 00:18:22.812 }' 00:18:22.812 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:23.072 BaseBdev2' 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:23.072 
05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.072 [2024-12-12 05:56:30.483838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.072 05:56:30 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.072 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.073 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.073 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.073 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.073 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.073 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.073 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.333 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.333 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.333 "name": "Existed_Raid", 00:18:23.333 "uuid": "3f2244c0-62e3-4baa-bbbc-887ce7f2da34", 00:18:23.333 "strip_size_kb": 0, 00:18:23.333 "state": "online", 00:18:23.333 "raid_level": "raid1", 00:18:23.333 "superblock": true, 00:18:23.333 "num_base_bdevs": 2, 00:18:23.333 "num_base_bdevs_discovered": 1, 00:18:23.333 "num_base_bdevs_operational": 1, 00:18:23.333 "base_bdevs_list": [ 00:18:23.333 { 00:18:23.333 "name": null, 00:18:23.333 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:23.333 "is_configured": false, 00:18:23.333 "data_offset": 0, 00:18:23.333 "data_size": 7936 00:18:23.333 }, 00:18:23.333 { 00:18:23.333 "name": "BaseBdev2", 00:18:23.333 "uuid": "d3891718-f3f6-45b4-b6b8-111443dd8689", 00:18:23.333 "is_configured": true, 00:18:23.333 "data_offset": 256, 00:18:23.333 "data_size": 7936 00:18:23.333 } 00:18:23.333 ] 00:18:23.333 }' 00:18:23.333 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.333 05:56:30 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.593 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:23.593 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:23.593 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.593 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.593 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:23.593 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.593 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:23.853 05:56:31 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.853 [2024-12-12 05:56:31.120071] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:23.853 [2024-12-12 05:56:31.120175] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:23.853 [2024-12-12 05:56:31.209479] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:23.853 [2024-12-12 05:56:31.209637] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:23.853 [2024-12-12 05:56:31.209655] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 87611 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 87611 ']' 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 87611 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87611 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.853 killing process with pid 87611 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87611' 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 87611 00:18:23.853 [2024-12-12 05:56:31.304994] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:23.853 05:56:31 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 87611 00:18:23.853 [2024-12-12 05:56:31.321480] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:25.235 
05:56:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:18:25.235 00:18:25.235 real 0m5.098s 00:18:25.235 user 0m7.418s 00:18:25.235 sys 0m0.909s 00:18:25.235 05:56:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:25.235 ************************************ 00:18:25.235 END TEST raid_state_function_test_sb_md_interleaved 00:18:25.235 ************************************ 00:18:25.235 05:56:32 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.235 05:56:32 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:18:25.235 05:56:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:25.235 05:56:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:25.235 05:56:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:25.235 ************************************ 00:18:25.236 START TEST raid_superblock_test_md_interleaved 00:18:25.236 ************************************ 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=87834 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 87834 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 87834 ']' 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.236 05:56:32 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:25.236 [2024-12-12 05:56:32.546185] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:18:25.236 [2024-12-12 05:56:32.546377] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87834 ] 00:18:25.236 [2024-12-12 05:56:32.722074] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.496 [2024-12-12 05:56:32.825311] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.496 [2024-12-12 05:56:33.001188] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:25.496 [2024-12-12 05:56:33.001224] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:26.066 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.067 malloc1 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.067 [2024-12-12 05:56:33.395968] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:26.067 [2024-12-12 05:56:33.396077] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.067 [2024-12-12 05:56:33.396116] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:26.067 [2024-12-12 05:56:33.396144] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.067 
[2024-12-12 05:56:33.397956] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.067 [2024-12-12 05:56:33.398049] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:26.067 pt1 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.067 malloc2 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.067 [2024-12-12 05:56:33.452111] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:26.067 [2024-12-12 05:56:33.452164] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:26.067 [2024-12-12 05:56:33.452181] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:26.067 [2024-12-12 05:56:33.452189] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:26.067 [2024-12-12 05:56:33.453952] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:26.067 [2024-12-12 05:56:33.453988] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:26.067 pt2 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.067 [2024-12-12 05:56:33.464120] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:26.067 [2024-12-12 05:56:33.465844] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:26.067 [2024-12-12 05:56:33.466017] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:26.067 [2024-12-12 05:56:33.466030] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:26.067 [2024-12-12 05:56:33.466101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:26.067 [2024-12-12 05:56:33.466179] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:26.067 [2024-12-12 05:56:33.466190] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:26.067 [2024-12-12 05:56:33.466263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.067 
05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.067 "name": "raid_bdev1", 00:18:26.067 "uuid": "c47303cd-ad57-42fd-afb7-e8a47faadeff", 00:18:26.067 "strip_size_kb": 0, 00:18:26.067 "state": "online", 00:18:26.067 "raid_level": "raid1", 00:18:26.067 "superblock": true, 00:18:26.067 "num_base_bdevs": 2, 00:18:26.067 "num_base_bdevs_discovered": 2, 00:18:26.067 "num_base_bdevs_operational": 2, 00:18:26.067 "base_bdevs_list": [ 00:18:26.067 { 00:18:26.067 "name": "pt1", 00:18:26.067 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:26.067 "is_configured": true, 00:18:26.067 "data_offset": 256, 00:18:26.067 "data_size": 7936 00:18:26.067 }, 00:18:26.067 { 00:18:26.067 "name": "pt2", 00:18:26.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:26.067 "is_configured": true, 00:18:26.067 "data_offset": 256, 00:18:26.067 "data_size": 7936 00:18:26.067 } 00:18:26.067 ] 00:18:26.067 }' 00:18:26.067 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.067 05:56:33 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.638 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:26.638 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:26.638 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:26.638 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:26.638 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:26.638 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:26.638 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:26.638 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.638 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.638 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:26.638 [2024-12-12 05:56:33.955588] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.638 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.638 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:26.638 "name": "raid_bdev1", 00:18:26.638 "aliases": [ 00:18:26.638 "c47303cd-ad57-42fd-afb7-e8a47faadeff" 00:18:26.638 ], 00:18:26.638 "product_name": "Raid Volume", 00:18:26.638 "block_size": 4128, 00:18:26.638 "num_blocks": 7936, 00:18:26.638 "uuid": "c47303cd-ad57-42fd-afb7-e8a47faadeff", 00:18:26.638 "md_size": 32, 
00:18:26.638 "md_interleave": true, 00:18:26.638 "dif_type": 0, 00:18:26.638 "assigned_rate_limits": { 00:18:26.638 "rw_ios_per_sec": 0, 00:18:26.638 "rw_mbytes_per_sec": 0, 00:18:26.638 "r_mbytes_per_sec": 0, 00:18:26.638 "w_mbytes_per_sec": 0 00:18:26.638 }, 00:18:26.638 "claimed": false, 00:18:26.638 "zoned": false, 00:18:26.638 "supported_io_types": { 00:18:26.638 "read": true, 00:18:26.638 "write": true, 00:18:26.638 "unmap": false, 00:18:26.638 "flush": false, 00:18:26.638 "reset": true, 00:18:26.638 "nvme_admin": false, 00:18:26.638 "nvme_io": false, 00:18:26.638 "nvme_io_md": false, 00:18:26.638 "write_zeroes": true, 00:18:26.638 "zcopy": false, 00:18:26.638 "get_zone_info": false, 00:18:26.638 "zone_management": false, 00:18:26.638 "zone_append": false, 00:18:26.638 "compare": false, 00:18:26.638 "compare_and_write": false, 00:18:26.638 "abort": false, 00:18:26.638 "seek_hole": false, 00:18:26.638 "seek_data": false, 00:18:26.638 "copy": false, 00:18:26.638 "nvme_iov_md": false 00:18:26.638 }, 00:18:26.638 "memory_domains": [ 00:18:26.638 { 00:18:26.638 "dma_device_id": "system", 00:18:26.638 "dma_device_type": 1 00:18:26.638 }, 00:18:26.638 { 00:18:26.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.638 "dma_device_type": 2 00:18:26.638 }, 00:18:26.638 { 00:18:26.638 "dma_device_id": "system", 00:18:26.638 "dma_device_type": 1 00:18:26.638 }, 00:18:26.638 { 00:18:26.638 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:26.638 "dma_device_type": 2 00:18:26.638 } 00:18:26.638 ], 00:18:26.638 "driver_specific": { 00:18:26.638 "raid": { 00:18:26.638 "uuid": "c47303cd-ad57-42fd-afb7-e8a47faadeff", 00:18:26.638 "strip_size_kb": 0, 00:18:26.638 "state": "online", 00:18:26.638 "raid_level": "raid1", 00:18:26.638 "superblock": true, 00:18:26.638 "num_base_bdevs": 2, 00:18:26.638 "num_base_bdevs_discovered": 2, 00:18:26.638 "num_base_bdevs_operational": 2, 00:18:26.638 "base_bdevs_list": [ 00:18:26.638 { 00:18:26.638 "name": "pt1", 00:18:26.638 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:18:26.638 "is_configured": true, 00:18:26.638 "data_offset": 256, 00:18:26.638 "data_size": 7936 00:18:26.638 }, 00:18:26.638 { 00:18:26.638 "name": "pt2", 00:18:26.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:26.638 "is_configured": true, 00:18:26.638 "data_offset": 256, 00:18:26.638 "data_size": 7936 00:18:26.638 } 00:18:26.638 ] 00:18:26.638 } 00:18:26.638 } 00:18:26.638 }' 00:18:26.638 05:56:33 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:26.638 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:26.638 pt2' 00:18:26.638 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.638 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:26.638 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.638 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:26.638 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.638 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.638 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.638 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.638 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:26.638 05:56:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:26.638 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.638 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:26.638 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.638 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.638 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:26.899 [2024-12-12 05:56:34.199156] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c47303cd-ad57-42fd-afb7-e8a47faadeff 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z c47303cd-ad57-42fd-afb7-e8a47faadeff ']' 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.899 [2024-12-12 05:56:34.246822] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:26.899 [2024-12-12 05:56:34.246844] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:26.899 [2024-12-12 05:56:34.246909] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:26.899 [2024-12-12 05:56:34.246955] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:26.899 [2024-12-12 05:56:34.246965] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.899 05:56:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.899 05:56:34 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.899 [2024-12-12 05:56:34.386629] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:26.899 [2024-12-12 05:56:34.388515] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:26.899 [2024-12-12 05:56:34.388620] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:18:26.899 [2024-12-12 05:56:34.388665] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:26.899 [2024-12-12 05:56:34.388679] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:26.899 [2024-12-12 05:56:34.388688] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:26.899 request: 00:18:26.899 { 00:18:26.899 "name": "raid_bdev1", 00:18:26.899 "raid_level": "raid1", 00:18:26.899 "base_bdevs": [ 00:18:26.899 "malloc1", 00:18:26.899 "malloc2" 00:18:26.899 ], 00:18:26.899 "superblock": false, 00:18:26.899 "method": "bdev_raid_create", 00:18:26.899 "req_id": 1 00:18:26.899 } 00:18:26.899 Got JSON-RPC error response 00:18:26.899 response: 00:18:26.899 { 00:18:26.899 "code": -17, 00:18:26.899 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:26.899 } 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.899 05:56:34 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.899 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.160 [2024-12-12 05:56:34.450512] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:27.160 [2024-12-12 05:56:34.450612] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.160 [2024-12-12 05:56:34.450642] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:27.160 [2024-12-12 05:56:34.450680] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.160 [2024-12-12 05:56:34.452480] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.160 [2024-12-12 05:56:34.452582] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:27.160 [2024-12-12 05:56:34.452645] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:27.160 [2024-12-12 05:56:34.452710] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:27.160 pt1 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.160 05:56:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.160 
"name": "raid_bdev1", 00:18:27.160 "uuid": "c47303cd-ad57-42fd-afb7-e8a47faadeff", 00:18:27.160 "strip_size_kb": 0, 00:18:27.160 "state": "configuring", 00:18:27.160 "raid_level": "raid1", 00:18:27.160 "superblock": true, 00:18:27.160 "num_base_bdevs": 2, 00:18:27.160 "num_base_bdevs_discovered": 1, 00:18:27.160 "num_base_bdevs_operational": 2, 00:18:27.160 "base_bdevs_list": [ 00:18:27.160 { 00:18:27.160 "name": "pt1", 00:18:27.160 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:27.160 "is_configured": true, 00:18:27.160 "data_offset": 256, 00:18:27.160 "data_size": 7936 00:18:27.160 }, 00:18:27.160 { 00:18:27.160 "name": null, 00:18:27.160 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:27.160 "is_configured": false, 00:18:27.160 "data_offset": 256, 00:18:27.160 "data_size": 7936 00:18:27.160 } 00:18:27.160 ] 00:18:27.160 }' 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.160 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.420 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:27.420 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:27.420 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:27.420 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:27.420 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.420 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.420 [2024-12-12 05:56:34.873770] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:27.420 [2024-12-12 05:56:34.873826] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:27.420 [2024-12-12 05:56:34.873842] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:27.420 [2024-12-12 05:56:34.873851] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:27.420 [2024-12-12 05:56:34.873962] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:27.420 [2024-12-12 05:56:34.873975] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:27.420 [2024-12-12 05:56:34.874009] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:27.420 [2024-12-12 05:56:34.874025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:27.420 [2024-12-12 05:56:34.874093] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:27.420 [2024-12-12 05:56:34.874103] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:27.420 [2024-12-12 05:56:34.874162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:27.421 [2024-12-12 05:56:34.874219] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:27.421 [2024-12-12 05:56:34.874226] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:27.421 [2024-12-12 05:56:34.874275] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.421 pt2 00:18:27.421 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.421 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:27.421 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:27.421 05:56:34 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:27.421 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:27.421 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:27.421 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:27.421 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:27.421 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:27.421 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:27.421 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:27.421 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:27.421 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:27.421 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.421 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.421 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.421 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.421 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.421 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:27.421 "name": 
"raid_bdev1", 00:18:27.421 "uuid": "c47303cd-ad57-42fd-afb7-e8a47faadeff", 00:18:27.421 "strip_size_kb": 0, 00:18:27.421 "state": "online", 00:18:27.421 "raid_level": "raid1", 00:18:27.421 "superblock": true, 00:18:27.421 "num_base_bdevs": 2, 00:18:27.421 "num_base_bdevs_discovered": 2, 00:18:27.421 "num_base_bdevs_operational": 2, 00:18:27.421 "base_bdevs_list": [ 00:18:27.421 { 00:18:27.421 "name": "pt1", 00:18:27.421 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:27.421 "is_configured": true, 00:18:27.421 "data_offset": 256, 00:18:27.421 "data_size": 7936 00:18:27.421 }, 00:18:27.421 { 00:18:27.421 "name": "pt2", 00:18:27.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:27.421 "is_configured": true, 00:18:27.421 "data_offset": 256, 00:18:27.421 "data_size": 7936 00:18:27.421 } 00:18:27.421 ] 00:18:27.421 }' 00:18:27.421 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:27.421 05:56:34 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.990 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:27.990 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:27.990 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:27.990 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:27.990 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:18:27.990 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:27.990 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:27.990 05:56:35 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.990 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.990 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:27.990 [2024-12-12 05:56:35.333337] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:27.990 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.990 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:27.990 "name": "raid_bdev1", 00:18:27.990 "aliases": [ 00:18:27.990 "c47303cd-ad57-42fd-afb7-e8a47faadeff" 00:18:27.990 ], 00:18:27.990 "product_name": "Raid Volume", 00:18:27.990 "block_size": 4128, 00:18:27.990 "num_blocks": 7936, 00:18:27.990 "uuid": "c47303cd-ad57-42fd-afb7-e8a47faadeff", 00:18:27.990 "md_size": 32, 00:18:27.990 "md_interleave": true, 00:18:27.990 "dif_type": 0, 00:18:27.990 "assigned_rate_limits": { 00:18:27.990 "rw_ios_per_sec": 0, 00:18:27.990 "rw_mbytes_per_sec": 0, 00:18:27.990 "r_mbytes_per_sec": 0, 00:18:27.990 "w_mbytes_per_sec": 0 00:18:27.990 }, 00:18:27.990 "claimed": false, 00:18:27.990 "zoned": false, 00:18:27.990 "supported_io_types": { 00:18:27.990 "read": true, 00:18:27.990 "write": true, 00:18:27.990 "unmap": false, 00:18:27.990 "flush": false, 00:18:27.990 "reset": true, 00:18:27.990 "nvme_admin": false, 00:18:27.990 "nvme_io": false, 00:18:27.990 "nvme_io_md": false, 00:18:27.990 "write_zeroes": true, 00:18:27.990 "zcopy": false, 00:18:27.990 "get_zone_info": false, 00:18:27.990 "zone_management": false, 00:18:27.990 "zone_append": false, 00:18:27.990 "compare": false, 00:18:27.990 "compare_and_write": false, 00:18:27.990 "abort": false, 00:18:27.990 "seek_hole": false, 00:18:27.990 "seek_data": false, 00:18:27.990 "copy": false, 00:18:27.990 "nvme_iov_md": 
false 00:18:27.990 }, 00:18:27.990 "memory_domains": [ 00:18:27.990 { 00:18:27.990 "dma_device_id": "system", 00:18:27.990 "dma_device_type": 1 00:18:27.990 }, 00:18:27.990 { 00:18:27.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.990 "dma_device_type": 2 00:18:27.990 }, 00:18:27.990 { 00:18:27.990 "dma_device_id": "system", 00:18:27.990 "dma_device_type": 1 00:18:27.990 }, 00:18:27.990 { 00:18:27.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:27.990 "dma_device_type": 2 00:18:27.990 } 00:18:27.990 ], 00:18:27.990 "driver_specific": { 00:18:27.990 "raid": { 00:18:27.990 "uuid": "c47303cd-ad57-42fd-afb7-e8a47faadeff", 00:18:27.990 "strip_size_kb": 0, 00:18:27.990 "state": "online", 00:18:27.990 "raid_level": "raid1", 00:18:27.990 "superblock": true, 00:18:27.990 "num_base_bdevs": 2, 00:18:27.990 "num_base_bdevs_discovered": 2, 00:18:27.990 "num_base_bdevs_operational": 2, 00:18:27.990 "base_bdevs_list": [ 00:18:27.990 { 00:18:27.990 "name": "pt1", 00:18:27.990 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:27.990 "is_configured": true, 00:18:27.990 "data_offset": 256, 00:18:27.990 "data_size": 7936 00:18:27.990 }, 00:18:27.990 { 00:18:27.990 "name": "pt2", 00:18:27.990 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:27.990 "is_configured": true, 00:18:27.990 "data_offset": 256, 00:18:27.990 "data_size": 7936 00:18:27.990 } 00:18:27.990 ] 00:18:27.990 } 00:18:27.990 } 00:18:27.990 }' 00:18:27.990 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:27.990 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:27.990 pt2' 00:18:27.990 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.990 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:18:27.990 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:27.990 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:27.990 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.990 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:27.990 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.990 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.250 [2024-12-12 05:56:35.584912] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' c47303cd-ad57-42fd-afb7-e8a47faadeff '!=' c47303cd-ad57-42fd-afb7-e8a47faadeff ']' 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.250 [2024-12-12 05:56:35.632636] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:18:28.250 "name": "raid_bdev1", 00:18:28.250 "uuid": "c47303cd-ad57-42fd-afb7-e8a47faadeff", 00:18:28.250 "strip_size_kb": 0, 00:18:28.250 "state": "online", 00:18:28.250 "raid_level": "raid1", 00:18:28.250 "superblock": true, 00:18:28.250 "num_base_bdevs": 2, 00:18:28.250 "num_base_bdevs_discovered": 1, 00:18:28.250 "num_base_bdevs_operational": 1, 00:18:28.250 "base_bdevs_list": [ 00:18:28.250 { 00:18:28.250 "name": null, 00:18:28.250 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.250 "is_configured": false, 00:18:28.250 "data_offset": 0, 00:18:28.250 "data_size": 7936 00:18:28.250 }, 00:18:28.250 { 00:18:28.250 "name": "pt2", 00:18:28.250 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:28.250 "is_configured": true, 00:18:28.250 "data_offset": 256, 00:18:28.250 "data_size": 7936 00:18:28.250 } 00:18:28.250 ] 00:18:28.250 }' 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.250 05:56:35 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.827 [2024-12-12 05:56:36.071834] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:28.827 [2024-12-12 05:56:36.071897] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:28.827 [2024-12-12 05:56:36.071993] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.827 [2024-12-12 05:56:36.072057] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:18:28.827 [2024-12-12 05:56:36.072126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.827 [2024-12-12 05:56:36.127751] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:28.827 [2024-12-12 05:56:36.127839] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:28.827 [2024-12-12 05:56:36.127868] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:28.827 [2024-12-12 05:56:36.127895] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:28.827 [2024-12-12 05:56:36.129715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:28.827 [2024-12-12 05:56:36.129786] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:28.827 [2024-12-12 05:56:36.129845] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:28.827 [2024-12-12 05:56:36.129922] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:28.827 [2024-12-12 05:56:36.130003] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:28.827 [2024-12-12 05:56:36.130046] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:18:28.827 [2024-12-12 05:56:36.130165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:28.827 [2024-12-12 05:56:36.130266] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:28.827 [2024-12-12 05:56:36.130301] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:28.827 [2024-12-12 05:56:36.130398] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:28.827 pt2 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.827 05:56:36 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.827 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.827 "name": "raid_bdev1", 00:18:28.827 "uuid": "c47303cd-ad57-42fd-afb7-e8a47faadeff", 00:18:28.827 "strip_size_kb": 0, 00:18:28.827 "state": "online", 00:18:28.827 "raid_level": "raid1", 00:18:28.827 "superblock": true, 00:18:28.827 "num_base_bdevs": 2, 00:18:28.827 "num_base_bdevs_discovered": 1, 00:18:28.828 "num_base_bdevs_operational": 1, 00:18:28.828 "base_bdevs_list": [ 00:18:28.828 { 00:18:28.828 "name": null, 00:18:28.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.828 "is_configured": false, 00:18:28.828 "data_offset": 256, 00:18:28.828 "data_size": 7936 00:18:28.828 }, 00:18:28.828 { 00:18:28.828 "name": "pt2", 00:18:28.828 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:28.828 "is_configured": true, 00:18:28.828 "data_offset": 256, 00:18:28.828 "data_size": 7936 00:18:28.828 } 00:18:28.828 ] 00:18:28.828 }' 00:18:28.828 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.828 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.103 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:29.103 05:56:36 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.103 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.103 [2024-12-12 05:56:36.610901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:29.103 [2024-12-12 05:56:36.610968] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:29.103 [2024-12-12 05:56:36.611048] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:29.103 [2024-12-12 05:56:36.611102] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:29.103 [2024-12-12 05:56:36.611161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:29.104 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.104 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.380 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:29.380 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.380 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.380 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.380 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:29.380 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:29.380 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:29.380 05:56:36 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:29.380 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.380 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.380 [2024-12-12 05:56:36.670832] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:29.380 [2024-12-12 05:56:36.670881] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.380 [2024-12-12 05:56:36.670898] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:29.380 [2024-12-12 05:56:36.670906] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.380 [2024-12-12 05:56:36.672729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.380 [2024-12-12 05:56:36.672803] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:29.380 [2024-12-12 05:56:36.672861] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:29.380 [2024-12-12 05:56:36.672900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:29.380 [2024-12-12 05:56:36.672985] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:29.380 [2024-12-12 05:56:36.672994] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:29.380 [2024-12-12 05:56:36.673008] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:29.380 [2024-12-12 05:56:36.673076] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:29.380 [2024-12-12 05:56:36.673132] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:18:29.380 [2024-12-12 05:56:36.673139] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:29.380 [2024-12-12 05:56:36.673202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:29.380 [2024-12-12 05:56:36.673256] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:29.380 [2024-12-12 05:56:36.673264] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:29.380 [2024-12-12 05:56:36.673323] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.380 pt1 00:18:29.380 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.380 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:29.380 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:29.380 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.380 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.380 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.380 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.380 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:29.380 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.380 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.380 05:56:36 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.380 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.381 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.381 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.381 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.381 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.381 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.381 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.381 "name": "raid_bdev1", 00:18:29.381 "uuid": "c47303cd-ad57-42fd-afb7-e8a47faadeff", 00:18:29.381 "strip_size_kb": 0, 00:18:29.381 "state": "online", 00:18:29.381 "raid_level": "raid1", 00:18:29.381 "superblock": true, 00:18:29.381 "num_base_bdevs": 2, 00:18:29.381 "num_base_bdevs_discovered": 1, 00:18:29.381 "num_base_bdevs_operational": 1, 00:18:29.381 "base_bdevs_list": [ 00:18:29.381 { 00:18:29.381 "name": null, 00:18:29.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.381 "is_configured": false, 00:18:29.381 "data_offset": 256, 00:18:29.381 "data_size": 7936 00:18:29.381 }, 00:18:29.381 { 00:18:29.381 "name": "pt2", 00:18:29.381 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:29.381 "is_configured": true, 00:18:29.381 "data_offset": 256, 00:18:29.381 "data_size": 7936 00:18:29.381 } 00:18:29.381 ] 00:18:29.381 }' 00:18:29.381 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.381 05:56:36 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:29.641 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:29.641 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:29.641 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.641 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.641 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.901 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:29.901 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:29.901 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:29.901 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.901 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.901 [2024-12-12 05:56:37.190166] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:29.901 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.901 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' c47303cd-ad57-42fd-afb7-e8a47faadeff '!=' c47303cd-ad57-42fd-afb7-e8a47faadeff ']' 00:18:29.901 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 87834 00:18:29.901 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 87834 ']' 00:18:29.901 05:56:37 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 87834 00:18:29.901 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:29.901 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:29.901 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87834 00:18:29.901 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:29.901 killing process with pid 87834 00:18:29.901 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:29.901 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87834' 00:18:29.901 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 87834 00:18:29.901 [2024-12-12 05:56:37.277987] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:29.901 [2024-12-12 05:56:37.278045] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:29.901 [2024-12-12 05:56:37.278079] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:29.901 [2024-12-12 05:56:37.278092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:29.901 05:56:37 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 87834 00:18:30.161 [2024-12-12 05:56:37.469156] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:31.101 05:56:38 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:18:31.101 00:18:31.101 real 0m6.071s 00:18:31.101 user 0m9.280s 00:18:31.101 sys 0m1.102s 00:18:31.101 
05:56:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.101 ************************************ 00:18:31.101 END TEST raid_superblock_test_md_interleaved 00:18:31.101 ************************************ 00:18:31.101 05:56:38 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.101 05:56:38 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:31.101 05:56:38 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:31.101 05:56:38 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:31.101 05:56:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:31.101 ************************************ 00:18:31.101 START TEST raid_rebuild_test_sb_md_interleaved 00:18:31.101 ************************************ 00:18:31.101 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:18:31.101 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:31.101 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:31.101 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:31.101 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:31.101 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:18:31.101 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:31.101 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:31.101 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:31.101 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:31.101 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:31.101 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:31.101 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:31.101 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:31.101 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:31.102 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:31.102 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:31.102 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:31.102 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:31.102 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:31.102 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:31.102 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:31.102 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:31.102 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:31.102 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:31.102 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=88121 00:18:31.102 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 88121 00:18:31.102 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:31.102 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88121 ']' 00:18:31.102 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.102 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.102 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.102 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.102 05:56:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:31.362 [2024-12-12 05:56:38.702828] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:18:31.362 [2024-12-12 05:56:38.703015] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:18:31.362 Zero copy mechanism will not be used. 
00:18:31.362 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88121 ] 00:18:31.362 [2024-12-12 05:56:38.875724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.622 [2024-12-12 05:56:38.982012] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.882 [2024-12-12 05:56:39.153417] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:31.882 [2024-12-12 05:56:39.153562] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:32.142 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.142 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:18:32.142 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:32.142 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:32.142 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.142 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.142 BaseBdev1_malloc 00:18:32.142 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.143 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:32.143 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.143 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.143 [2024-12-12 05:56:39.558466] vbdev_passthru.c: 
608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:32.143 [2024-12-12 05:56:39.558597] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.143 [2024-12-12 05:56:39.558623] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:32.143 [2024-12-12 05:56:39.558634] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.143 [2024-12-12 05:56:39.560401] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.143 [2024-12-12 05:56:39.560442] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:32.143 BaseBdev1 00:18:32.143 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.143 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:32.143 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:32.143 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.143 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.143 BaseBdev2_malloc 00:18:32.143 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.143 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:32.143 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.143 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.143 [2024-12-12 05:56:39.611600] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev2_malloc 00:18:32.143 [2024-12-12 05:56:39.611654] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.143 [2024-12-12 05:56:39.611674] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:32.143 [2024-12-12 05:56:39.611686] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.143 [2024-12-12 05:56:39.613435] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.143 [2024-12-12 05:56:39.613473] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:32.143 BaseBdev2 00:18:32.143 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.143 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:32.143 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.143 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.402 spare_malloc 00:18:32.402 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.402 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:32.402 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.402 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.402 spare_delay 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create 
-b spare_delay -p spare 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.403 [2024-12-12 05:56:39.712363] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:32.403 [2024-12-12 05:56:39.712418] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.403 [2024-12-12 05:56:39.712436] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:32.403 [2024-12-12 05:56:39.712447] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.403 [2024-12-12 05:56:39.714258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.403 [2024-12-12 05:56:39.714296] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:32.403 spare 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.403 [2024-12-12 05:56:39.724385] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:32.403 [2024-12-12 05:56:39.726175] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:32.403 [2024-12-12 05:56:39.726462] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:32.403 [2024-12-12 05:56:39.726495] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:32.403 [2024-12-12 05:56:39.726574] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:32.403 [2024-12-12 05:56:39.726647] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:32.403 [2024-12-12 05:56:39.726655] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:32.403 [2024-12-12 05:56:39.726732] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.403 05:56:39 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.403 "name": "raid_bdev1", 00:18:32.403 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:32.403 "strip_size_kb": 0, 00:18:32.403 "state": "online", 00:18:32.403 "raid_level": "raid1", 00:18:32.403 "superblock": true, 00:18:32.403 "num_base_bdevs": 2, 00:18:32.403 "num_base_bdevs_discovered": 2, 00:18:32.403 "num_base_bdevs_operational": 2, 00:18:32.403 "base_bdevs_list": [ 00:18:32.403 { 00:18:32.403 "name": "BaseBdev1", 00:18:32.403 "uuid": "1097d4b8-c071-5c6f-88e1-c01cd4044559", 00:18:32.403 "is_configured": true, 00:18:32.403 "data_offset": 256, 00:18:32.403 "data_size": 7936 00:18:32.403 }, 00:18:32.403 { 00:18:32.403 "name": "BaseBdev2", 00:18:32.403 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:32.403 "is_configured": true, 00:18:32.403 "data_offset": 256, 00:18:32.403 "data_size": 7936 00:18:32.403 } 00:18:32.403 ] 00:18:32.403 }' 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.403 05:56:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.663 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:32.663 05:56:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:32.663 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.663 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.663 [2024-12-12 05:56:40.139910] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:32.663 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.663 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:32.663 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.663 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.663 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:32.922 [2024-12-12 05:56:40.239487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.922 "name": "raid_bdev1", 00:18:32.922 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:32.922 "strip_size_kb": 0, 00:18:32.922 "state": "online", 00:18:32.922 "raid_level": "raid1", 00:18:32.922 "superblock": true, 00:18:32.922 "num_base_bdevs": 2, 00:18:32.922 "num_base_bdevs_discovered": 1, 00:18:32.922 "num_base_bdevs_operational": 1, 00:18:32.922 "base_bdevs_list": [ 00:18:32.922 { 00:18:32.922 "name": null, 00:18:32.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.922 "is_configured": false, 00:18:32.922 "data_offset": 0, 00:18:32.922 "data_size": 7936 00:18:32.922 }, 00:18:32.922 { 00:18:32.922 "name": "BaseBdev2", 00:18:32.922 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:32.922 "is_configured": true, 00:18:32.922 "data_offset": 256, 00:18:32.922 "data_size": 7936 00:18:32.922 } 00:18:32.922 ] 00:18:32.922 }' 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.922 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.492 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:33.492 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.492 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:33.492 [2024-12-12 05:56:40.734764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:33.492 [2024-12-12 05:56:40.752352] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 
00:18:33.492 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.492 05:56:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:33.492 [2024-12-12 05:56:40.754152] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:34.432 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:34.432 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.432 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:34.432 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:34.432 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.432 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.432 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.432 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.432 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.432 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.432 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.432 "name": "raid_bdev1", 00:18:34.432 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:34.432 "strip_size_kb": 0, 00:18:34.432 "state": "online", 00:18:34.432 "raid_level": "raid1", 00:18:34.432 "superblock": true, 00:18:34.432 
"num_base_bdevs": 2, 00:18:34.432 "num_base_bdevs_discovered": 2, 00:18:34.432 "num_base_bdevs_operational": 2, 00:18:34.432 "process": { 00:18:34.432 "type": "rebuild", 00:18:34.432 "target": "spare", 00:18:34.432 "progress": { 00:18:34.432 "blocks": 2560, 00:18:34.432 "percent": 32 00:18:34.432 } 00:18:34.432 }, 00:18:34.432 "base_bdevs_list": [ 00:18:34.432 { 00:18:34.432 "name": "spare", 00:18:34.432 "uuid": "23afa468-7216-5680-bfa4-f3625d2b463c", 00:18:34.432 "is_configured": true, 00:18:34.432 "data_offset": 256, 00:18:34.432 "data_size": 7936 00:18:34.432 }, 00:18:34.432 { 00:18:34.432 "name": "BaseBdev2", 00:18:34.432 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:34.432 "is_configured": true, 00:18:34.432 "data_offset": 256, 00:18:34.432 "data_size": 7936 00:18:34.432 } 00:18:34.432 ] 00:18:34.432 }' 00:18:34.432 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.432 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:34.432 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.432 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:34.432 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:34.432 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.432 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.432 [2024-12-12 05:56:41.918397] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:34.693 [2024-12-12 05:56:41.958866] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:34.693 
[2024-12-12 05:56:41.958953] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:34.693 [2024-12-12 05:56:41.958967] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:34.693 [2024-12-12 05:56:41.958979] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:34.693 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.693 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:34.693 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:34.693 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:34.693 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:34.693 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:34.693 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:34.693 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:34.693 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:34.693 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:34.693 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:34.693 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.693 05:56:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:18:34.693 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.693 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.693 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.693 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:34.693 "name": "raid_bdev1", 00:18:34.693 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:34.693 "strip_size_kb": 0, 00:18:34.693 "state": "online", 00:18:34.693 "raid_level": "raid1", 00:18:34.693 "superblock": true, 00:18:34.693 "num_base_bdevs": 2, 00:18:34.693 "num_base_bdevs_discovered": 1, 00:18:34.693 "num_base_bdevs_operational": 1, 00:18:34.693 "base_bdevs_list": [ 00:18:34.693 { 00:18:34.693 "name": null, 00:18:34.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.693 "is_configured": false, 00:18:34.693 "data_offset": 0, 00:18:34.693 "data_size": 7936 00:18:34.693 }, 00:18:34.693 { 00:18:34.693 "name": "BaseBdev2", 00:18:34.693 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:34.693 "is_configured": true, 00:18:34.693 "data_offset": 256, 00:18:34.693 "data_size": 7936 00:18:34.693 } 00:18:34.693 ] 00:18:34.693 }' 00:18:34.693 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:34.693 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.954 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:34.954 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.954 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:34.954 05:56:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:34.954 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.954 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.954 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.954 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.954 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.954 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.954 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.954 "name": "raid_bdev1", 00:18:34.954 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:34.954 "strip_size_kb": 0, 00:18:34.954 "state": "online", 00:18:34.954 "raid_level": "raid1", 00:18:34.954 "superblock": true, 00:18:34.954 "num_base_bdevs": 2, 00:18:34.954 "num_base_bdevs_discovered": 1, 00:18:34.954 "num_base_bdevs_operational": 1, 00:18:34.954 "base_bdevs_list": [ 00:18:34.954 { 00:18:34.954 "name": null, 00:18:34.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.954 "is_configured": false, 00:18:34.954 "data_offset": 0, 00:18:34.954 "data_size": 7936 00:18:34.954 }, 00:18:34.954 { 00:18:34.954 "name": "BaseBdev2", 00:18:34.954 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:34.954 "is_configured": true, 00:18:34.954 "data_offset": 256, 00:18:34.954 "data_size": 7936 00:18:34.954 } 00:18:34.954 ] 00:18:34.954 }' 00:18:34.954 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.954 05:56:42 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:34.954 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.214 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:35.214 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:35.214 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.214 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:35.214 [2024-12-12 05:56:42.503711] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:35.214 [2024-12-12 05:56:42.519367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:35.214 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.214 05:56:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:35.214 [2024-12-12 05:56:42.521202] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:36.154 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.154 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.154 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.154 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.154 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.154 
05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.154 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.154 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.154 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.154 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.154 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.154 "name": "raid_bdev1", 00:18:36.154 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:36.154 "strip_size_kb": 0, 00:18:36.154 "state": "online", 00:18:36.154 "raid_level": "raid1", 00:18:36.154 "superblock": true, 00:18:36.154 "num_base_bdevs": 2, 00:18:36.154 "num_base_bdevs_discovered": 2, 00:18:36.154 "num_base_bdevs_operational": 2, 00:18:36.154 "process": { 00:18:36.154 "type": "rebuild", 00:18:36.154 "target": "spare", 00:18:36.154 "progress": { 00:18:36.154 "blocks": 2560, 00:18:36.154 "percent": 32 00:18:36.154 } 00:18:36.154 }, 00:18:36.154 "base_bdevs_list": [ 00:18:36.154 { 00:18:36.154 "name": "spare", 00:18:36.154 "uuid": "23afa468-7216-5680-bfa4-f3625d2b463c", 00:18:36.154 "is_configured": true, 00:18:36.154 "data_offset": 256, 00:18:36.154 "data_size": 7936 00:18:36.154 }, 00:18:36.154 { 00:18:36.154 "name": "BaseBdev2", 00:18:36.154 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:36.154 "is_configured": true, 00:18:36.154 "data_offset": 256, 00:18:36.154 "data_size": 7936 00:18:36.154 } 00:18:36.154 ] 00:18:36.154 }' 00:18:36.154 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.154 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.154 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.414 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.414 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:36.414 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:36.414 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:36.414 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:36.414 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:36.414 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:36.414 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=717 00:18:36.414 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:36.414 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:36.414 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:36.414 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:36.414 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:36.414 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:36.414 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq 
-r '.[] | select(.name == "raid_bdev1")' 00:18:36.414 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.414 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.414 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:36.414 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.414 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:36.414 "name": "raid_bdev1", 00:18:36.414 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:36.414 "strip_size_kb": 0, 00:18:36.414 "state": "online", 00:18:36.414 "raid_level": "raid1", 00:18:36.414 "superblock": true, 00:18:36.414 "num_base_bdevs": 2, 00:18:36.414 "num_base_bdevs_discovered": 2, 00:18:36.414 "num_base_bdevs_operational": 2, 00:18:36.414 "process": { 00:18:36.414 "type": "rebuild", 00:18:36.414 "target": "spare", 00:18:36.414 "progress": { 00:18:36.414 "blocks": 2816, 00:18:36.414 "percent": 35 00:18:36.414 } 00:18:36.414 }, 00:18:36.414 "base_bdevs_list": [ 00:18:36.414 { 00:18:36.414 "name": "spare", 00:18:36.414 "uuid": "23afa468-7216-5680-bfa4-f3625d2b463c", 00:18:36.414 "is_configured": true, 00:18:36.414 "data_offset": 256, 00:18:36.414 "data_size": 7936 00:18:36.414 }, 00:18:36.414 { 00:18:36.414 "name": "BaseBdev2", 00:18:36.414 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:36.414 "is_configured": true, 00:18:36.414 "data_offset": 256, 00:18:36.414 "data_size": 7936 00:18:36.414 } 00:18:36.414 ] 00:18:36.414 }' 00:18:36.415 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:36.415 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:36.415 05:56:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:36.415 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:36.415 05:56:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:37.354 05:56:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:37.354 05:56:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:37.354 05:56:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.354 05:56:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:37.354 05:56:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:37.354 05:56:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.354 05:56:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.354 05:56:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.354 05:56:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.354 05:56:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.354 05:56:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.354 05:56:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.354 "name": "raid_bdev1", 00:18:37.354 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:37.354 "strip_size_kb": 0, 00:18:37.354 "state": 
"online", 00:18:37.354 "raid_level": "raid1", 00:18:37.354 "superblock": true, 00:18:37.354 "num_base_bdevs": 2, 00:18:37.354 "num_base_bdevs_discovered": 2, 00:18:37.355 "num_base_bdevs_operational": 2, 00:18:37.355 "process": { 00:18:37.355 "type": "rebuild", 00:18:37.355 "target": "spare", 00:18:37.355 "progress": { 00:18:37.355 "blocks": 5888, 00:18:37.355 "percent": 74 00:18:37.355 } 00:18:37.355 }, 00:18:37.355 "base_bdevs_list": [ 00:18:37.355 { 00:18:37.355 "name": "spare", 00:18:37.355 "uuid": "23afa468-7216-5680-bfa4-f3625d2b463c", 00:18:37.355 "is_configured": true, 00:18:37.355 "data_offset": 256, 00:18:37.355 "data_size": 7936 00:18:37.355 }, 00:18:37.355 { 00:18:37.355 "name": "BaseBdev2", 00:18:37.355 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:37.355 "is_configured": true, 00:18:37.355 "data_offset": 256, 00:18:37.355 "data_size": 7936 00:18:37.355 } 00:18:37.355 ] 00:18:37.355 }' 00:18:37.614 05:56:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.614 05:56:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:37.614 05:56:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.614 05:56:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:37.614 05:56:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:38.184 [2024-12-12 05:56:45.632915] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:38.184 [2024-12-12 05:56:45.632974] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:38.184 [2024-12-12 05:56:45.633066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.755 05:56:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:38.755 05:56:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:38.755 05:56:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.755 05:56:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:38.755 05:56:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:38.755 05:56:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.755 05:56:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.755 05:56:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.755 05:56:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.755 05:56:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.755 "name": "raid_bdev1", 00:18:38.755 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:38.755 "strip_size_kb": 0, 00:18:38.755 "state": "online", 00:18:38.755 "raid_level": "raid1", 00:18:38.755 "superblock": true, 00:18:38.755 "num_base_bdevs": 2, 00:18:38.755 "num_base_bdevs_discovered": 2, 00:18:38.755 "num_base_bdevs_operational": 2, 00:18:38.755 "base_bdevs_list": [ 00:18:38.755 { 00:18:38.755 "name": "spare", 00:18:38.755 "uuid": "23afa468-7216-5680-bfa4-f3625d2b463c", 00:18:38.755 "is_configured": true, 00:18:38.755 "data_offset": 256, 
00:18:38.755 "data_size": 7936 00:18:38.755 }, 00:18:38.755 { 00:18:38.755 "name": "BaseBdev2", 00:18:38.755 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:38.755 "is_configured": true, 00:18:38.755 "data_offset": 256, 00:18:38.755 "data_size": 7936 00:18:38.755 } 00:18:38.755 ] 00:18:38.755 }' 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.755 05:56:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:38.755 "name": "raid_bdev1", 00:18:38.755 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:38.755 "strip_size_kb": 0, 00:18:38.755 "state": "online", 00:18:38.755 "raid_level": "raid1", 00:18:38.755 "superblock": true, 00:18:38.755 "num_base_bdevs": 2, 00:18:38.755 "num_base_bdevs_discovered": 2, 00:18:38.755 "num_base_bdevs_operational": 2, 00:18:38.755 "base_bdevs_list": [ 00:18:38.755 { 00:18:38.755 "name": "spare", 00:18:38.755 "uuid": "23afa468-7216-5680-bfa4-f3625d2b463c", 00:18:38.755 "is_configured": true, 00:18:38.755 "data_offset": 256, 00:18:38.755 "data_size": 7936 00:18:38.755 }, 00:18:38.755 { 00:18:38.755 "name": "BaseBdev2", 00:18:38.755 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:38.755 "is_configured": true, 00:18:38.755 "data_offset": 256, 00:18:38.755 "data_size": 7936 00:18:38.755 } 00:18:38.755 ] 00:18:38.755 }' 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.755 05:56:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:38.755 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.015 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.015 "name": "raid_bdev1", 00:18:39.015 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:39.015 "strip_size_kb": 0, 00:18:39.015 "state": "online", 00:18:39.015 "raid_level": "raid1", 00:18:39.015 "superblock": true, 00:18:39.015 "num_base_bdevs": 2, 00:18:39.015 "num_base_bdevs_discovered": 2, 
00:18:39.015 "num_base_bdevs_operational": 2, 00:18:39.015 "base_bdevs_list": [ 00:18:39.015 { 00:18:39.015 "name": "spare", 00:18:39.015 "uuid": "23afa468-7216-5680-bfa4-f3625d2b463c", 00:18:39.015 "is_configured": true, 00:18:39.015 "data_offset": 256, 00:18:39.015 "data_size": 7936 00:18:39.015 }, 00:18:39.015 { 00:18:39.015 "name": "BaseBdev2", 00:18:39.015 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:39.015 "is_configured": true, 00:18:39.015 "data_offset": 256, 00:18:39.015 "data_size": 7936 00:18:39.015 } 00:18:39.015 ] 00:18:39.015 }' 00:18:39.015 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.015 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.275 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:39.275 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.275 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.275 [2024-12-12 05:56:46.743376] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:39.275 [2024-12-12 05:56:46.743406] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:39.275 [2024-12-12 05:56:46.743475] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:39.275 [2024-12-12 05:56:46.743542] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:39.275 [2024-12-12 05:56:46.743553] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:39.275 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.275 05:56:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.275 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:18:39.275 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.275 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.275 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.535 [2024-12-12 05:56:46.815248] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:39.535 [2024-12-12 05:56:46.815297] vbdev_passthru.c: 636:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:18:39.535 [2024-12-12 05:56:46.815318] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:18:39.535 [2024-12-12 05:56:46.815327] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:39.535 [2024-12-12 05:56:46.817425] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:39.535 [2024-12-12 05:56:46.817462] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:39.535 [2024-12-12 05:56:46.817525] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:39.535 [2024-12-12 05:56:46.817570] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:39.535 [2024-12-12 05:56:46.817673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:39.535 spare 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.535 [2024-12-12 05:56:46.917558] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:39.535 [2024-12-12 05:56:46.917625] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:39.535 [2024-12-12 05:56:46.917746] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:39.535 [2024-12-12 05:56:46.917886] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:39.535 [2024-12-12 05:56:46.917930] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:39.535 [2024-12-12 05:56:46.918070] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.535 05:56:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.535 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.535 "name": "raid_bdev1", 00:18:39.535 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:39.535 "strip_size_kb": 0, 00:18:39.535 "state": "online", 00:18:39.535 "raid_level": "raid1", 00:18:39.535 "superblock": true, 00:18:39.535 "num_base_bdevs": 2, 00:18:39.535 "num_base_bdevs_discovered": 2, 00:18:39.535 "num_base_bdevs_operational": 2, 00:18:39.535 "base_bdevs_list": [ 00:18:39.535 { 00:18:39.535 "name": "spare", 00:18:39.535 "uuid": "23afa468-7216-5680-bfa4-f3625d2b463c", 00:18:39.536 "is_configured": true, 00:18:39.536 "data_offset": 256, 00:18:39.536 "data_size": 7936 00:18:39.536 }, 00:18:39.536 { 00:18:39.536 "name": "BaseBdev2", 00:18:39.536 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:39.536 "is_configured": true, 00:18:39.536 "data_offset": 256, 00:18:39.536 "data_size": 7936 00:18:39.536 } 00:18:39.536 ] 00:18:39.536 }' 00:18:39.536 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.536 05:56:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:40.106 "name": "raid_bdev1", 00:18:40.106 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:40.106 "strip_size_kb": 0, 00:18:40.106 "state": "online", 00:18:40.106 "raid_level": "raid1", 00:18:40.106 "superblock": true, 00:18:40.106 "num_base_bdevs": 2, 00:18:40.106 "num_base_bdevs_discovered": 2, 00:18:40.106 "num_base_bdevs_operational": 2, 00:18:40.106 "base_bdevs_list": [ 00:18:40.106 { 00:18:40.106 "name": "spare", 00:18:40.106 "uuid": "23afa468-7216-5680-bfa4-f3625d2b463c", 00:18:40.106 "is_configured": true, 00:18:40.106 "data_offset": 256, 00:18:40.106 "data_size": 7936 00:18:40.106 }, 00:18:40.106 { 00:18:40.106 "name": "BaseBdev2", 00:18:40.106 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:40.106 "is_configured": true, 00:18:40.106 "data_offset": 256, 00:18:40.106 "data_size": 7936 00:18:40.106 } 00:18:40.106 ] 00:18:40.106 }' 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 
-- # jq -r '.process.target // "none"' 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.106 [2024-12-12 05:56:47.546141] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.106 "name": "raid_bdev1", 00:18:40.106 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:40.106 "strip_size_kb": 0, 00:18:40.106 "state": "online", 00:18:40.106 "raid_level": "raid1", 00:18:40.106 "superblock": true, 00:18:40.106 "num_base_bdevs": 2, 00:18:40.106 "num_base_bdevs_discovered": 1, 00:18:40.106 "num_base_bdevs_operational": 1, 00:18:40.106 "base_bdevs_list": [ 00:18:40.106 { 00:18:40.106 "name": null, 00:18:40.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.106 
"is_configured": false, 00:18:40.106 "data_offset": 0, 00:18:40.106 "data_size": 7936 00:18:40.106 }, 00:18:40.106 { 00:18:40.106 "name": "BaseBdev2", 00:18:40.106 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:40.106 "is_configured": true, 00:18:40.106 "data_offset": 256, 00:18:40.106 "data_size": 7936 00:18:40.106 } 00:18:40.106 ] 00:18:40.106 }' 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.106 05:56:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.676 05:56:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:40.676 05:56:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.676 05:56:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:40.676 [2024-12-12 05:56:48.013336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:40.676 [2024-12-12 05:56:48.013481] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:40.676 [2024-12-12 05:56:48.013497] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:40.676 [2024-12-12 05:56:48.013544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:40.676 [2024-12-12 05:56:48.029262] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:40.676 05:56:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.676 05:56:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:40.676 [2024-12-12 05:56:48.031070] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:41.616 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:41.616 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:41.616 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:41.616 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:41.616 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:41.616 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.616 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.616 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.616 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.616 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.616 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:18:41.616 "name": "raid_bdev1", 00:18:41.616 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:41.616 "strip_size_kb": 0, 00:18:41.616 "state": "online", 00:18:41.616 "raid_level": "raid1", 00:18:41.616 "superblock": true, 00:18:41.616 "num_base_bdevs": 2, 00:18:41.616 "num_base_bdevs_discovered": 2, 00:18:41.616 "num_base_bdevs_operational": 2, 00:18:41.616 "process": { 00:18:41.616 "type": "rebuild", 00:18:41.616 "target": "spare", 00:18:41.616 "progress": { 00:18:41.616 "blocks": 2560, 00:18:41.616 "percent": 32 00:18:41.616 } 00:18:41.616 }, 00:18:41.616 "base_bdevs_list": [ 00:18:41.616 { 00:18:41.616 "name": "spare", 00:18:41.616 "uuid": "23afa468-7216-5680-bfa4-f3625d2b463c", 00:18:41.616 "is_configured": true, 00:18:41.616 "data_offset": 256, 00:18:41.616 "data_size": 7936 00:18:41.616 }, 00:18:41.616 { 00:18:41.616 "name": "BaseBdev2", 00:18:41.616 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:41.616 "is_configured": true, 00:18:41.616 "data_offset": 256, 00:18:41.616 "data_size": 7936 00:18:41.616 } 00:18:41.616 ] 00:18:41.616 }' 00:18:41.616 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.876 [2024-12-12 05:56:49.190849] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:41.876 [2024-12-12 05:56:49.235721] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:41.876 [2024-12-12 05:56:49.235851] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.876 [2024-12-12 05:56:49.235886] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:41.876 [2024-12-12 05:56:49.235908] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.876 05:56:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.876 "name": "raid_bdev1", 00:18:41.876 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:41.876 "strip_size_kb": 0, 00:18:41.876 "state": "online", 00:18:41.876 "raid_level": "raid1", 00:18:41.876 "superblock": true, 00:18:41.876 "num_base_bdevs": 2, 00:18:41.876 "num_base_bdevs_discovered": 1, 00:18:41.876 "num_base_bdevs_operational": 1, 00:18:41.876 "base_bdevs_list": [ 00:18:41.876 { 00:18:41.876 "name": null, 00:18:41.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.876 "is_configured": false, 00:18:41.876 "data_offset": 0, 00:18:41.876 "data_size": 7936 00:18:41.876 }, 00:18:41.876 { 00:18:41.876 "name": "BaseBdev2", 00:18:41.876 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:41.876 "is_configured": true, 00:18:41.876 "data_offset": 256, 00:18:41.876 "data_size": 7936 00:18:41.876 } 00:18:41.876 ] 00:18:41.876 }' 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.876 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.447 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:42.447 05:56:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.447 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:42.447 [2024-12-12 05:56:49.740169] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:42.447 [2024-12-12 05:56:49.740288] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.447 [2024-12-12 05:56:49.740333] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:42.447 [2024-12-12 05:56:49.740365] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.447 [2024-12-12 05:56:49.740599] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.447 [2024-12-12 05:56:49.740652] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:42.447 [2024-12-12 05:56:49.740737] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:42.447 [2024-12-12 05:56:49.740775] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:42.447 [2024-12-12 05:56:49.740820] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:42.447 [2024-12-12 05:56:49.740899] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:42.447 [2024-12-12 05:56:49.756108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:18:42.447 spare 00:18:42.447 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.447 05:56:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:42.447 [2024-12-12 05:56:49.757991] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:43.386 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:43.386 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.386 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:43.386 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:43.386 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.386 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.386 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.386 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.386 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.386 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.386 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:18:43.386 "name": "raid_bdev1", 00:18:43.387 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:43.387 "strip_size_kb": 0, 00:18:43.387 "state": "online", 00:18:43.387 "raid_level": "raid1", 00:18:43.387 "superblock": true, 00:18:43.387 "num_base_bdevs": 2, 00:18:43.387 "num_base_bdevs_discovered": 2, 00:18:43.387 "num_base_bdevs_operational": 2, 00:18:43.387 "process": { 00:18:43.387 "type": "rebuild", 00:18:43.387 "target": "spare", 00:18:43.387 "progress": { 00:18:43.387 "blocks": 2560, 00:18:43.387 "percent": 32 00:18:43.387 } 00:18:43.387 }, 00:18:43.387 "base_bdevs_list": [ 00:18:43.387 { 00:18:43.387 "name": "spare", 00:18:43.387 "uuid": "23afa468-7216-5680-bfa4-f3625d2b463c", 00:18:43.387 "is_configured": true, 00:18:43.387 "data_offset": 256, 00:18:43.387 "data_size": 7936 00:18:43.387 }, 00:18:43.387 { 00:18:43.387 "name": "BaseBdev2", 00:18:43.387 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:43.387 "is_configured": true, 00:18:43.387 "data_offset": 256, 00:18:43.387 "data_size": 7936 00:18:43.387 } 00:18:43.387 ] 00:18:43.387 }' 00:18:43.387 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:43.387 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:43.387 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:43.646 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:43.646 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:43.646 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.646 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.646 [2024-12-12 
05:56:50.921682] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:43.646 [2024-12-12 05:56:50.962595] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:43.646 [2024-12-12 05:56:50.962716] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:43.647 [2024-12-12 05:56:50.962752] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:43.647 [2024-12-12 05:56:50.962772] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:43.647 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.647 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:43.647 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:43.647 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:43.647 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:43.647 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:43.647 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:43.647 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:43.647 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:43.647 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:43.647 05:56:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:43.647 05:56:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.647 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.647 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.647 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.647 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.647 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:43.647 "name": "raid_bdev1", 00:18:43.647 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:43.647 "strip_size_kb": 0, 00:18:43.647 "state": "online", 00:18:43.647 "raid_level": "raid1", 00:18:43.647 "superblock": true, 00:18:43.647 "num_base_bdevs": 2, 00:18:43.647 "num_base_bdevs_discovered": 1, 00:18:43.647 "num_base_bdevs_operational": 1, 00:18:43.647 "base_bdevs_list": [ 00:18:43.647 { 00:18:43.647 "name": null, 00:18:43.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:43.647 "is_configured": false, 00:18:43.647 "data_offset": 0, 00:18:43.647 "data_size": 7936 00:18:43.647 }, 00:18:43.647 { 00:18:43.647 "name": "BaseBdev2", 00:18:43.647 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:43.647 "is_configured": true, 00:18:43.647 "data_offset": 256, 00:18:43.647 "data_size": 7936 00:18:43.647 } 00:18:43.647 ] 00:18:43.647 }' 00:18:43.647 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:43.647 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:43.907 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:43.907 05:56:51 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:43.907 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:43.907 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:43.907 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:43.907 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.907 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:43.907 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.907 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.167 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.167 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:44.167 "name": "raid_bdev1", 00:18:44.167 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:44.167 "strip_size_kb": 0, 00:18:44.167 "state": "online", 00:18:44.167 "raid_level": "raid1", 00:18:44.167 "superblock": true, 00:18:44.167 "num_base_bdevs": 2, 00:18:44.167 "num_base_bdevs_discovered": 1, 00:18:44.167 "num_base_bdevs_operational": 1, 00:18:44.167 "base_bdevs_list": [ 00:18:44.167 { 00:18:44.167 "name": null, 00:18:44.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:44.167 "is_configured": false, 00:18:44.167 "data_offset": 0, 00:18:44.167 "data_size": 7936 00:18:44.167 }, 00:18:44.167 { 00:18:44.167 "name": "BaseBdev2", 00:18:44.167 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:44.167 "is_configured": true, 00:18:44.167 "data_offset": 256, 
00:18:44.167 "data_size": 7936 00:18:44.167 } 00:18:44.167 ] 00:18:44.167 }' 00:18:44.167 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:44.167 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:44.167 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:44.167 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:44.167 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:44.167 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.167 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.167 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.167 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:44.167 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:44.167 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:44.167 [2024-12-12 05:56:51.570569] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:44.167 [2024-12-12 05:56:51.570619] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:44.167 [2024-12-12 05:56:51.570640] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:44.167 [2024-12-12 05:56:51.570649] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:44.167 [2024-12-12 05:56:51.570807] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:44.167 [2024-12-12 05:56:51.570820] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:44.167 [2024-12-12 05:56:51.570864] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:44.167 [2024-12-12 05:56:51.570875] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:44.167 [2024-12-12 05:56:51.570884] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:44.167 [2024-12-12 05:56:51.570894] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:44.167 BaseBdev1 00:18:44.167 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:44.167 05:56:51 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:45.107 05:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:45.107 05:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.107 05:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.107 05:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:45.107 05:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:45.107 05:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:45.107 05:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.107 05:56:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.107 05:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.107 05:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.107 05:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.107 05:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.107 05:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.107 05:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.107 05:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.367 05:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.367 "name": "raid_bdev1", 00:18:45.367 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:45.367 "strip_size_kb": 0, 00:18:45.367 "state": "online", 00:18:45.367 "raid_level": "raid1", 00:18:45.367 "superblock": true, 00:18:45.367 "num_base_bdevs": 2, 00:18:45.367 "num_base_bdevs_discovered": 1, 00:18:45.367 "num_base_bdevs_operational": 1, 00:18:45.367 "base_bdevs_list": [ 00:18:45.367 { 00:18:45.367 "name": null, 00:18:45.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.367 "is_configured": false, 00:18:45.367 "data_offset": 0, 00:18:45.367 "data_size": 7936 00:18:45.367 }, 00:18:45.367 { 00:18:45.367 "name": "BaseBdev2", 00:18:45.367 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:45.367 "is_configured": true, 00:18:45.367 "data_offset": 256, 00:18:45.367 "data_size": 7936 00:18:45.367 } 00:18:45.367 ] 00:18:45.367 }' 00:18:45.367 05:56:52 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.367 05:56:52 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.627 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:45.627 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:45.627 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:45.627 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:45.627 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:45.627 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.627 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.627 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.627 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.627 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.627 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:45.627 "name": "raid_bdev1", 00:18:45.627 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:45.627 "strip_size_kb": 0, 00:18:45.627 "state": "online", 00:18:45.627 "raid_level": "raid1", 00:18:45.627 "superblock": true, 00:18:45.627 "num_base_bdevs": 2, 00:18:45.627 "num_base_bdevs_discovered": 1, 00:18:45.627 "num_base_bdevs_operational": 1, 00:18:45.627 "base_bdevs_list": [ 00:18:45.627 { 00:18:45.627 "name": 
null, 00:18:45.627 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:45.627 "is_configured": false, 00:18:45.627 "data_offset": 0, 00:18:45.627 "data_size": 7936 00:18:45.627 }, 00:18:45.627 { 00:18:45.627 "name": "BaseBdev2", 00:18:45.627 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:45.627 "is_configured": true, 00:18:45.627 "data_offset": 256, 00:18:45.627 "data_size": 7936 00:18:45.627 } 00:18:45.627 ] 00:18:45.627 }' 00:18:45.627 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:45.627 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:45.627 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:45.627 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:45.627 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:45.627 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:18:45.627 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:45.627 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:45.887 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.887 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:45.887 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:45.887 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:45.887 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.887 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:45.887 [2024-12-12 05:56:53.155850] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:45.887 [2024-12-12 05:56:53.155979] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:45.887 [2024-12-12 05:56:53.155996] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:45.887 request: 00:18:45.887 { 00:18:45.887 "base_bdev": "BaseBdev1", 00:18:45.887 "raid_bdev": "raid_bdev1", 00:18:45.887 "method": "bdev_raid_add_base_bdev", 00:18:45.887 "req_id": 1 00:18:45.887 } 00:18:45.887 Got JSON-RPC error response 00:18:45.887 response: 00:18:45.887 { 00:18:45.887 "code": -22, 00:18:45.887 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:45.887 } 00:18:45.887 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:45.887 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:18:45.887 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:45.887 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:45.887 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:45.887 05:56:53 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:46.828 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:18:46.828 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.828 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:46.828 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.828 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:46.828 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:46.828 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.828 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.828 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.828 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.828 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.828 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.828 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.828 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:46.828 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.828 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.828 "name": "raid_bdev1", 00:18:46.828 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:46.828 "strip_size_kb": 0, 
00:18:46.828 "state": "online", 00:18:46.828 "raid_level": "raid1", 00:18:46.828 "superblock": true, 00:18:46.828 "num_base_bdevs": 2, 00:18:46.828 "num_base_bdevs_discovered": 1, 00:18:46.828 "num_base_bdevs_operational": 1, 00:18:46.828 "base_bdevs_list": [ 00:18:46.828 { 00:18:46.828 "name": null, 00:18:46.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:46.828 "is_configured": false, 00:18:46.828 "data_offset": 0, 00:18:46.828 "data_size": 7936 00:18:46.828 }, 00:18:46.828 { 00:18:46.828 "name": "BaseBdev2", 00:18:46.828 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:46.828 "is_configured": true, 00:18:46.828 "data_offset": 256, 00:18:46.828 "data_size": 7936 00:18:46.828 } 00:18:46.828 ] 00:18:46.828 }' 00:18:46.828 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.828 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.088 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:47.088 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:47.088 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:47.088 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:47.088 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:47.088 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.088 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.088 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.088 
05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:47.088 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.088 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:47.088 "name": "raid_bdev1", 00:18:47.088 "uuid": "6c90cca5-d27e-488e-90b9-a9e68c7584b8", 00:18:47.088 "strip_size_kb": 0, 00:18:47.088 "state": "online", 00:18:47.088 "raid_level": "raid1", 00:18:47.088 "superblock": true, 00:18:47.088 "num_base_bdevs": 2, 00:18:47.088 "num_base_bdevs_discovered": 1, 00:18:47.088 "num_base_bdevs_operational": 1, 00:18:47.088 "base_bdevs_list": [ 00:18:47.088 { 00:18:47.088 "name": null, 00:18:47.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.088 "is_configured": false, 00:18:47.088 "data_offset": 0, 00:18:47.088 "data_size": 7936 00:18:47.088 }, 00:18:47.088 { 00:18:47.088 "name": "BaseBdev2", 00:18:47.088 "uuid": "0737a589-8f9d-5fa0-8807-fa69afbccbde", 00:18:47.088 "is_configured": true, 00:18:47.088 "data_offset": 256, 00:18:47.088 "data_size": 7936 00:18:47.088 } 00:18:47.088 ] 00:18:47.088 }' 00:18:47.348 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:47.348 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:47.348 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:47.348 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:47.348 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 88121 00:18:47.348 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88121 ']' 00:18:47.348 05:56:54 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88121 00:18:47.348 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:18:47.348 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:47.348 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88121 00:18:47.348 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:47.348 killing process with pid 88121 00:18:47.348 Received shutdown signal, test time was about 60.000000 seconds 00:18:47.348 00:18:47.348 Latency(us) 00:18:47.348 [2024-12-12T05:56:54.870Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.348 [2024-12-12T05:56:54.870Z] =================================================================================================================== 00:18:47.348 [2024-12-12T05:56:54.870Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:47.348 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:47.348 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88121' 00:18:47.348 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88121 00:18:47.348 [2024-12-12 05:56:54.733586] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:47.348 [2024-12-12 05:56:54.733677] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:47.348 [2024-12-12 05:56:54.733714] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:47.348 [2024-12-12 05:56:54.733725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:47.348 05:56:54 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88121 00:18:47.608 [2024-12-12 05:56:55.016721] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:48.546 ************************************ 00:18:48.546 END TEST raid_rebuild_test_sb_md_interleaved 00:18:48.546 ************************************ 00:18:48.546 05:56:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:18:48.546 00:18:48.546 real 0m17.437s 00:18:48.546 user 0m22.881s 00:18:48.546 sys 0m1.710s 00:18:48.546 05:56:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.546 05:56:56 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:48.805 05:56:56 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:18:48.805 05:56:56 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:18:48.805 05:56:56 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 88121 ']' 00:18:48.805 05:56:56 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 88121 00:18:48.805 05:56:56 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:18:48.805 00:18:48.805 real 11m39.358s 00:18:48.805 user 15m52.106s 00:18:48.805 sys 2m0.414s 00:18:48.805 ************************************ 00:18:48.805 END TEST bdev_raid 00:18:48.805 ************************************ 00:18:48.805 05:56:56 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:48.805 05:56:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:48.805 05:56:56 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:48.805 05:56:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:48.805 05:56:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:48.805 05:56:56 -- common/autotest_common.sh@10 -- # set +x 00:18:48.805 
************************************ 00:18:48.805 START TEST spdkcli_raid 00:18:48.805 ************************************ 00:18:48.805 05:56:56 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:48.805 * Looking for test storage... 00:18:49.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:49.066 05:56:56 spdkcli_raid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:49.066 05:56:56 spdkcli_raid -- common/autotest_common.sh@1711 -- # lcov --version 00:18:49.066 05:56:56 spdkcli_raid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:49.066 05:56:56 spdkcli_raid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:49.066 05:56:56 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:18:49.066 05:56:56 spdkcli_raid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:49.066 05:56:56 spdkcli_raid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:49.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.066 --rc genhtml_branch_coverage=1 00:18:49.066 --rc genhtml_function_coverage=1 00:18:49.066 --rc genhtml_legend=1 00:18:49.066 --rc geninfo_all_blocks=1 00:18:49.066 --rc geninfo_unexecuted_blocks=1 00:18:49.066 00:18:49.066 ' 00:18:49.066 05:56:56 spdkcli_raid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:49.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.066 --rc genhtml_branch_coverage=1 00:18:49.066 --rc genhtml_function_coverage=1 00:18:49.066 --rc genhtml_legend=1 00:18:49.066 --rc geninfo_all_blocks=1 00:18:49.066 --rc geninfo_unexecuted_blocks=1 00:18:49.066 00:18:49.066 ' 00:18:49.066 
05:56:56 spdkcli_raid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:49.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.066 --rc genhtml_branch_coverage=1 00:18:49.066 --rc genhtml_function_coverage=1 00:18:49.066 --rc genhtml_legend=1 00:18:49.066 --rc geninfo_all_blocks=1 00:18:49.066 --rc geninfo_unexecuted_blocks=1 00:18:49.066 00:18:49.066 ' 00:18:49.066 05:56:56 spdkcli_raid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:49.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.066 --rc genhtml_branch_coverage=1 00:18:49.066 --rc genhtml_function_coverage=1 00:18:49.066 --rc genhtml_legend=1 00:18:49.066 --rc geninfo_all_blocks=1 00:18:49.066 --rc geninfo_unexecuted_blocks=1 00:18:49.066 00:18:49.066 ' 00:18:49.066 05:56:56 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:49.066 05:56:56 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:49.066 05:56:56 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:49.066 05:56:56 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:18:49.066 05:56:56 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:18:49.066 05:56:56 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:18:49.066 05:56:56 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:18:49.066 05:56:56 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:18:49.066 05:56:56 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:18:49.066 05:56:56 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:18:49.067 05:56:56 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:18:49.067 05:56:56 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:18:49.067 05:56:56 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:18:49.067 05:56:56 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:18:49.067 05:56:56 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:18:49.067 05:56:56 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:18:49.067 05:56:56 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:18:49.067 05:56:56 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:18:49.067 05:56:56 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:18:49.067 05:56:56 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:18:49.067 05:56:56 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:18:49.067 05:56:56 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:18:49.067 05:56:56 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:18:49.067 05:56:56 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:18:49.067 05:56:56 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:18:49.067 05:56:56 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:18:49.067 05:56:56 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:49.067 05:56:56 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:18:49.067 05:56:56 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:18:49.067 05:56:56 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:18:49.067 05:56:56 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:18:49.067 05:56:56 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:18:49.067 05:56:56 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:18:49.067 05:56:56 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:49.067 05:56:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.067 05:56:56 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:18:49.067 05:56:56 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=88697 00:18:49.067 05:56:56 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:18:49.067 05:56:56 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 88697 00:18:49.067 05:56:56 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 88697 ']' 00:18:49.067 05:56:56 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.067 05:56:56 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.067 05:56:56 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.067 05:56:56 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.067 05:56:56 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:49.067 [2024-12-12 05:56:56.579155] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:18:49.067 [2024-12-12 05:56:56.579342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88697 ] 00:18:49.327 [2024-12-12 05:56:56.756967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:49.586 [2024-12-12 05:56:56.862370] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.586 [2024-12-12 05:56:56.862409] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:50.156 05:56:57 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.156 05:56:57 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:18:50.156 05:56:57 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:18:50.156 05:56:57 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:50.156 05:56:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:50.415 05:56:57 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:18:50.416 05:56:57 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:50.416 05:56:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:50.416 05:56:57 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:18:50.416 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:18:50.416 ' 00:18:51.798 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:18:51.798 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:18:52.058 05:56:59 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:18:52.058 05:56:59 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:52.058 05:56:59 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:18:52.058 05:56:59 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:18:52.058 05:56:59 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.058 05:56:59 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:52.058 05:56:59 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:18:52.058 ' 00:18:52.998 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:18:53.261 05:57:00 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:18:53.261 05:57:00 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.261 05:57:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.261 05:57:00 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:18:53.261 05:57:00 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.261 05:57:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.261 05:57:00 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:18:53.261 05:57:00 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:18:53.856 05:57:01 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:18:53.856 05:57:01 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:18:53.856 05:57:01 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:18:53.856 05:57:01 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.856 05:57:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.856 05:57:01 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:18:53.856 05:57:01 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:53.856 05:57:01 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:53.856 05:57:01 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:18:53.856 ' 00:18:54.805 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:18:54.805 05:57:02 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:18:54.805 05:57:02 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:54.805 05:57:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:55.064 05:57:02 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:18:55.064 05:57:02 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:55.064 05:57:02 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:55.064 05:57:02 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:18:55.064 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:18:55.064 ' 00:18:56.443 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:18:56.443 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:18:56.443 05:57:03 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:18:56.443 05:57:03 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:56.443 05:57:03 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:56.443 05:57:03 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 88697 00:18:56.443 05:57:03 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 88697 ']' 00:18:56.443 05:57:03 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 88697 00:18:56.443 05:57:03 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:18:56.443 05:57:03 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:56.443 05:57:03 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88697 00:18:56.443 05:57:03 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:56.443 05:57:03 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:56.443 05:57:03 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88697' 00:18:56.443 killing process with pid 88697 00:18:56.443 05:57:03 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 88697 00:18:56.443 05:57:03 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 88697 00:18:58.983 05:57:06 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:18:58.983 05:57:06 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 88697 ']' 00:18:58.983 05:57:06 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 88697 00:18:58.983 05:57:06 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 88697 ']' 00:18:58.983 05:57:06 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 88697 00:18:58.983 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (88697) - No such process 00:18:58.983 05:57:06 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 88697 is not found' 00:18:58.983 Process with pid 88697 is not found 00:18:58.983 05:57:06 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:18:58.984 05:57:06 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:18:58.984 05:57:06 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:18:58.984 05:57:06 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:18:58.984 ************************************ 00:18:58.984 END TEST spdkcli_raid 
00:18:58.984 ************************************ 00:18:58.984 00:18:58.984 real 0m10.179s 00:18:58.984 user 0m20.896s 00:18:58.984 sys 0m1.188s 00:18:58.984 05:57:06 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:58.984 05:57:06 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:18:58.984 05:57:06 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:58.984 05:57:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:58.984 05:57:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:58.984 05:57:06 -- common/autotest_common.sh@10 -- # set +x 00:18:58.984 ************************************ 00:18:58.984 START TEST blockdev_raid5f 00:18:58.984 ************************************ 00:18:58.984 05:57:06 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:18:59.244 * Looking for test storage... 00:18:59.244 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:59.244 05:57:06 blockdev_raid5f -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:59.244 05:57:06 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lcov --version 00:18:59.244 05:57:06 blockdev_raid5f -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:59.244 05:57:06 blockdev_raid5f -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:59.244 05:57:06 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:59.244 05:57:06 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:59.244 05:57:06 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:59.244 05:57:06 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:18:59.244 05:57:06 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:18:59.244 05:57:06 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:18:59.244 05:57:06 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:18:59.244 05:57:06 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:18:59.244 05:57:06 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:18:59.244 05:57:06 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:18:59.244 05:57:06 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:59.244 05:57:06 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:18:59.244 05:57:06 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:18:59.244 05:57:06 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:59.244 05:57:06 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:59.244 05:57:06 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:18:59.244 05:57:06 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:18:59.244 05:57:06 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:59.245 05:57:06 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:18:59.245 05:57:06 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:18:59.245 05:57:06 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:18:59.245 05:57:06 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:18:59.245 05:57:06 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:59.245 05:57:06 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:18:59.245 05:57:06 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:18:59.245 05:57:06 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:59.245 05:57:06 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:59.245 05:57:06 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:18:59.245 05:57:06 blockdev_raid5f -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:59.245 05:57:06 blockdev_raid5f -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:59.245 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.245 --rc genhtml_branch_coverage=1 00:18:59.245 --rc genhtml_function_coverage=1 00:18:59.245 --rc genhtml_legend=1 00:18:59.245 --rc geninfo_all_blocks=1 00:18:59.245 --rc geninfo_unexecuted_blocks=1 00:18:59.245 00:18:59.245 ' 00:18:59.245 05:57:06 blockdev_raid5f -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:59.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.245 --rc genhtml_branch_coverage=1 00:18:59.245 --rc genhtml_function_coverage=1 00:18:59.245 --rc genhtml_legend=1 00:18:59.245 --rc geninfo_all_blocks=1 00:18:59.245 --rc geninfo_unexecuted_blocks=1 00:18:59.245 00:18:59.245 ' 00:18:59.245 05:57:06 blockdev_raid5f -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:59.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.245 --rc genhtml_branch_coverage=1 00:18:59.245 --rc genhtml_function_coverage=1 00:18:59.245 --rc genhtml_legend=1 00:18:59.245 --rc geninfo_all_blocks=1 00:18:59.245 --rc geninfo_unexecuted_blocks=1 00:18:59.245 00:18:59.245 ' 00:18:59.245 05:57:06 blockdev_raid5f -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:59.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:59.245 --rc genhtml_branch_coverage=1 00:18:59.245 --rc genhtml_function_coverage=1 00:18:59.245 --rc genhtml_legend=1 00:18:59.245 --rc geninfo_all_blocks=1 00:18:59.245 --rc geninfo_unexecuted_blocks=1 00:18:59.245 00:18:59.245 ' 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=88917 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess 
"$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:59.245 05:57:06 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 88917 00:18:59.245 05:57:06 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 88917 ']' 00:18:59.245 05:57:06 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:59.245 05:57:06 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:59.245 05:57:06 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:59.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:59.245 05:57:06 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:59.245 05:57:06 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:18:59.505 [2024-12-12 05:57:06.807007] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:18:59.505 [2024-12-12 05:57:06.807222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88917 ] 00:18:59.505 [2024-12-12 05:57:06.981454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.765 [2024-12-12 05:57:07.113918] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.721 05:57:08 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:00.721 05:57:08 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:19:00.721 05:57:08 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:19:00.721 05:57:08 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:19:00.721 05:57:08 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:00.721 05:57:08 blockdev_raid5f -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.721 05:57:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.721 Malloc0 00:19:00.721 Malloc1 00:19:00.981 Malloc2 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.981 05:57:08 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.981 05:57:08 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:19:00.981 05:57:08 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.981 05:57:08 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.981 05:57:08 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.981 05:57:08 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:19:00.981 05:57:08 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 
00:19:00.981 05:57:08 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.981 05:57:08 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:19:00.981 05:57:08 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "50ed711e-7a73-4733-af73-e9cc999d8022"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "50ed711e-7a73-4733-af73-e9cc999d8022",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "50ed711e-7a73-4733-af73-e9cc999d8022",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "10b36f16-d8d1-41db-8ff0-6824c12e95e6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "6b942dc3-01de-4242-a54e-f7b89dcce187",' ' "is_configured": true,' ' "data_offset": 0,' ' 
"data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "bb511c4e-663f-4148-8b3d-8856879c1fd5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:00.981 05:57:08 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:19:00.981 05:57:08 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:19:00.981 05:57:08 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:19:00.981 05:57:08 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:19:00.981 05:57:08 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 88917 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 88917 ']' 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 88917 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88917 00:19:00.981 killing process with pid 88917 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88917' 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 88917 00:19:00.981 05:57:08 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 88917 00:19:04.279 05:57:11 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:04.279 05:57:11 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:04.279 05:57:11 
blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:04.279 05:57:11 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.279 05:57:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:04.279 ************************************ 00:19:04.279 START TEST bdev_hello_world 00:19:04.279 ************************************ 00:19:04.279 05:57:11 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:04.279 [2024-12-12 05:57:11.382507] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:19:04.279 [2024-12-12 05:57:11.382616] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88960 ] 00:19:04.279 [2024-12-12 05:57:11.559117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.279 [2024-12-12 05:57:11.695702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.848 [2024-12-12 05:57:12.304291] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:04.848 [2024-12-12 05:57:12.304421] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:04.848 [2024-12-12 05:57:12.304442] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:04.848 [2024-12-12 05:57:12.304960] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:04.848 [2024-12-12 05:57:12.305114] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:04.848 [2024-12-12 05:57:12.305131] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:04.848 [2024-12-12 05:57:12.305176] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev 
: Hello World! 00:19:04.848 00:19:04.848 [2024-12-12 05:57:12.305193] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:06.758 00:19:06.758 real 0m2.467s 00:19:06.758 user 0m2.008s 00:19:06.758 sys 0m0.336s 00:19:06.758 05:57:13 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:06.758 05:57:13 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:06.758 ************************************ 00:19:06.758 END TEST bdev_hello_world 00:19:06.758 ************************************ 00:19:06.758 05:57:13 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:19:06.758 05:57:13 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:06.758 05:57:13 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.758 05:57:13 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:06.758 ************************************ 00:19:06.758 START TEST bdev_bounds 00:19:06.758 ************************************ 00:19:06.758 05:57:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:19:06.758 05:57:13 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=88990 00:19:06.758 05:57:13 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:06.758 05:57:13 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:06.758 05:57:13 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 88990' 00:19:06.758 Process bdevio pid: 88990 00:19:06.758 05:57:13 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 88990 00:19:06.758 05:57:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 88990 ']' 00:19:06.758 05:57:13 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.758 05:57:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.758 05:57:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:06.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.759 05:57:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.759 05:57:13 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:06.759 [2024-12-12 05:57:13.924278] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:19:06.759 [2024-12-12 05:57:13.924464] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88990 ] 00:19:06.759 [2024-12-12 05:57:14.098135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:06.759 [2024-12-12 05:57:14.233624] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.759 [2024-12-12 05:57:14.233776] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.759 [2024-12-12 05:57:14.233807] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.698 05:57:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.698 05:57:14 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:19:07.698 05:57:14 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:07.698 I/O targets: 00:19:07.698 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:07.698 00:19:07.698 
00:19:07.698 CUnit - A unit testing framework for C - Version 2.1-3 00:19:07.698 http://cunit.sourceforge.net/ 00:19:07.698 00:19:07.698 00:19:07.698 Suite: bdevio tests on: raid5f 00:19:07.698 Test: blockdev write read block ...passed 00:19:07.698 Test: blockdev write zeroes read block ...passed 00:19:07.698 Test: blockdev write zeroes read no split ...passed 00:19:07.698 Test: blockdev write zeroes read split ...passed 00:19:07.698 Test: blockdev write zeroes read split partial ...passed 00:19:07.698 Test: blockdev reset ...passed 00:19:07.698 Test: blockdev write read 8 blocks ...passed 00:19:07.698 Test: blockdev write read size > 128k ...passed 00:19:07.698 Test: blockdev write read invalid size ...passed 00:19:07.698 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:07.698 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:07.698 Test: blockdev write read max offset ...passed 00:19:07.698 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:07.698 Test: blockdev writev readv 8 blocks ...passed 00:19:07.698 Test: blockdev writev readv 30 x 1block ...passed 00:19:07.698 Test: blockdev writev readv block ...passed 00:19:07.698 Test: blockdev writev readv size > 128k ...passed 00:19:07.698 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:07.698 Test: blockdev comparev and writev ...passed 00:19:07.698 Test: blockdev nvme passthru rw ...passed 00:19:07.698 Test: blockdev nvme passthru vendor specific ...passed 00:19:07.698 Test: blockdev nvme admin passthru ...passed 00:19:07.698 Test: blockdev copy ...passed 00:19:07.699 00:19:07.699 Run Summary: Type Total Ran Passed Failed Inactive 00:19:07.699 suites 1 1 n/a 0 0 00:19:07.699 tests 23 23 23 0 0 00:19:07.699 asserts 130 130 130 0 n/a 00:19:07.699 00:19:07.699 Elapsed time = 0.627 seconds 00:19:07.699 0 00:19:07.958 05:57:15 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 88990 00:19:07.958 
05:57:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 88990 ']' 00:19:07.958 05:57:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 88990 00:19:07.958 05:57:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:19:07.958 05:57:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.958 05:57:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88990 00:19:07.958 05:57:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.958 05:57:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.958 05:57:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88990' 00:19:07.958 killing process with pid 88990 00:19:07.958 05:57:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 88990 00:19:07.958 05:57:15 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 88990 00:19:09.340 05:57:16 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:19:09.340 00:19:09.340 real 0m2.921s 00:19:09.340 user 0m7.165s 00:19:09.340 sys 0m0.467s 00:19:09.340 05:57:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.340 05:57:16 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:09.340 ************************************ 00:19:09.340 END TEST bdev_bounds 00:19:09.340 ************************************ 00:19:09.340 05:57:16 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:09.340 05:57:16 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:09.340 05:57:16 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.340 
05:57:16 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:09.340 ************************************ 00:19:09.340 START TEST bdev_nbd 00:19:09.340 ************************************ 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=89037 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 89037 /var/tmp/spdk-nbd.sock 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 89037 ']' 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:19:09.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:09.340 05:57:16 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:09.601 [2024-12-12 05:57:16.938457] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:19:09.601 [2024-12-12 05:57:16.938673] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.601 [2024-12-12 05:57:17.116732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.860 [2024-12-12 05:57:17.250831] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.430 05:57:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:10.430 05:57:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:19:10.430 05:57:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:19:10.430 05:57:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:10.430 05:57:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:19:10.430 05:57:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:19:10.430 05:57:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:19:10.430 05:57:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:10.430 05:57:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:19:10.430 05:57:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:19:10.430 05:57:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:19:10.430 05:57:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:19:10.430 05:57:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:19:10.430 05:57:17 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:10.430 05:57:17 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:19:10.690 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:19:10.690 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:19:10.690 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:19:10.690 05:57:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:10.690 05:57:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:10.690 05:57:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:10.690 05:57:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:10.690 05:57:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:10.690 05:57:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:10.690 05:57:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:10.690 05:57:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:10.690 05:57:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:10.690 1+0 records in 00:19:10.690 1+0 records out 00:19:10.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000631429 s, 6.5 MB/s 00:19:10.690 05:57:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.690 05:57:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:10.690 05:57:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:10.690 05:57:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:19:10.690 05:57:18 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:10.690 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:19:10.690 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:19:10.690 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:10.950 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:19:10.950 { 00:19:10.950 "nbd_device": "/dev/nbd0", 00:19:10.950 "bdev_name": "raid5f" 00:19:10.950 } 00:19:10.950 ]' 00:19:10.950 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:19:10.950 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:19:10.950 { 00:19:10.950 "nbd_device": "/dev/nbd0", 00:19:10.950 "bdev_name": "raid5f" 00:19:10.950 } 00:19:10.950 ]' 00:19:10.950 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:19:10.950 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:10.950 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:10.950 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:10.950 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:10.950 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:10.950 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:10.950 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:11.210 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:19:11.210 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:11.210 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:11.210 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:11.210 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:11.210 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:11.210 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:11.210 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:11.210 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:11.210 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:11.210 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:11.471 05:57:18 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:19:11.732 /dev/nbd0 00:19:11.732 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:11.732 05:57:19 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:11.732 05:57:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:11.732 05:57:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:19:11.732 05:57:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:11.732 05:57:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:11.732 05:57:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:11.732 05:57:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:19:11.732 05:57:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:11.732 05:57:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:11.732 05:57:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:11.732 1+0 records in 00:19:11.732 1+0 records out 00:19:11.732 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000618696 s, 6.6 MB/s 00:19:11.732 05:57:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.732 05:57:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:19:11.732 05:57:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:11.732 05:57:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:11.732 05:57:19 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:19:11.732 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:11.732 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:11.732 05:57:19 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:11.732 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:11.732 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:19:11.992 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:19:11.992 { 00:19:11.992 "nbd_device": "/dev/nbd0", 00:19:11.992 "bdev_name": "raid5f" 00:19:11.992 } 00:19:11.992 ]' 00:19:11.992 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:11.992 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:19:11.992 { 00:19:11.992 "nbd_device": "/dev/nbd0", 00:19:11.992 "bdev_name": "raid5f" 00:19:11.992 } 00:19:11.992 ]' 00:19:11.992 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:19:11.992 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:19:11.992 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:11.992 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:19:11.992 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:19:11.992 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:19:11.992 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:19:11.992 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:19:11.992 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:11.992 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:11.992 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:19:11.992 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:11.992 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:19:11.992 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:19:11.992 256+0 records in 00:19:11.992 256+0 records out 00:19:11.992 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141365 s, 74.2 MB/s 00:19:11.992 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:19:11.992 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:19:12.253 256+0 records in 00:19:12.253 256+0 records out 00:19:12.253 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0287017 s, 36.5 MB/s 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:12.253 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:19:12.513 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:19:12.513 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:19:12.513 05:57:19 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:19:12.513 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:19:12.513 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:19:12.513 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:19:12.513 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:19:12.513 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:19:12.513 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:19:12.513 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:19:12.513 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:19:12.513 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:19:12.513 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:12.513 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:12.513 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:19:12.513 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:19:12.773 malloc_lvol_verify 00:19:12.773 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:19:13.032 399bc60d-920d-47db-9ac6-a12e77f49264 00:19:13.032 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:19:13.292 d64a3da7-f21b-4327-88ad-a25ea5643ecc 00:19:13.292 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:19:13.292 /dev/nbd0 00:19:13.292 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:19:13.292 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:19:13.292 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:19:13.292 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:19:13.292 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:19:13.552 mke2fs 1.47.0 (5-Feb-2023) 00:19:13.552 Discarding device blocks: 0/4096 done 00:19:13.552 Creating filesystem with 4096 1k blocks and 1024 inodes 00:19:13.552 00:19:13.552 Allocating group tables: 0/1 done 00:19:13.552 Writing inode tables: 0/1 done 00:19:13.552 Creating journal (1024 blocks): done 00:19:13.552 Writing superblocks and filesystem accounting information: 0/1 done 00:19:13.552 00:19:13.552 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:19:13.552 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:19:13.552 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:13.552 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:13.552 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:19:13.552 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:13.552 05:57:20 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:19:13.552 05:57:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:13.552 05:57:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:13.552 05:57:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:13.552 05:57:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:13.552 05:57:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:13.552 05:57:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:13.552 05:57:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:19:13.552 05:57:21 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:19:13.552 05:57:21 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 89037 00:19:13.552 05:57:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 89037 ']' 00:19:13.552 05:57:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 89037 00:19:13.552 05:57:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:19:13.552 05:57:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.552 05:57:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89037 00:19:13.552 killing process with pid 89037 00:19:13.552 05:57:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:13.552 05:57:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:13.552 05:57:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89037' 00:19:13.552 05:57:21 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 89037 00:19:13.552 05:57:21 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 89037 00:19:14.934 ************************************ 00:19:14.934 END TEST bdev_nbd 00:19:14.934 ************************************ 00:19:14.934 05:57:22 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:19:14.934 00:19:14.934 real 0m5.608s 00:19:14.934 user 0m7.402s 00:19:14.934 sys 0m1.429s 00:19:14.934 05:57:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.934 05:57:22 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:19:15.195 05:57:22 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:19:15.195 05:57:22 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:19:15.195 05:57:22 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:19:15.195 05:57:22 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:19:15.195 05:57:22 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:15.195 05:57:22 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.195 05:57:22 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:15.195 ************************************ 00:19:15.195 START TEST bdev_fio 00:19:15.195 ************************************ 00:19:15.195 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:15.195 ************************************ 00:19:15.195 START TEST bdev_fio_rw_verify 00:19:15.195 ************************************ 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:19:15.195 05:57:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:19:15.456 05:57:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:19:15.456 05:57:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:19:15.456 05:57:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:19:15.456 05:57:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:15.456 05:57:22 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:19:15.456 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:19:15.456 fio-3.35 00:19:15.456 Starting 1 thread 00:19:27.723 00:19:27.723 job_raid5f: (groupid=0, jobs=1): err= 0: pid=89199: Thu Dec 12 05:57:33 2024 00:19:27.723 read: IOPS=12.1k, BW=47.4MiB/s (49.7MB/s)(474MiB/10001msec) 00:19:27.723 slat (usec): min=17, max=101, avg=19.99, stdev= 2.23 00:19:27.723 clat (usec): min=11, max=439, avg=133.40, stdev=47.55 00:19:27.723 lat (usec): min=30, max=459, avg=153.39, stdev=47.78 00:19:27.723 clat percentiles (usec): 00:19:27.723 | 50.000th=[ 137], 99.000th=[ 223], 99.900th=[ 245], 99.990th=[ 277], 00:19:27.723 | 99.999th=[ 420] 00:19:27.723 write: IOPS=12.7k, BW=49.5MiB/s (51.9MB/s)(489MiB/9871msec); 0 zone resets 00:19:27.723 slat (usec): min=7, max=303, avg=16.32, stdev= 3.76 00:19:27.723 clat (usec): min=58, max=1372, avg=303.06, stdev=41.01 00:19:27.723 lat (usec): min=73, max=1599, avg=319.38, stdev=41.95 00:19:27.723 clat percentiles (usec): 00:19:27.723 | 50.000th=[ 306], 99.000th=[ 379], 99.900th=[ 578], 99.990th=[ 1123], 00:19:27.723 | 99.999th=[ 1319] 00:19:27.723 bw ( KiB/s): min=47832, max=53536, per=98.86%, avg=50145.26, stdev=1370.95, samples=19 00:19:27.723 iops : min=11958, max=13384, avg=12536.32, stdev=342.74, samples=19 00:19:27.723 lat (usec) : 20=0.01%, 50=0.01%, 100=14.70%, 
250=39.55%, 500=45.67% 00:19:27.723 lat (usec) : 750=0.05%, 1000=0.02% 00:19:27.723 lat (msec) : 2=0.01% 00:19:27.723 cpu : usr=98.92%, sys=0.41%, ctx=27, majf=0, minf=9931 00:19:27.723 IO depths : 1=7.7%, 2=19.9%, 4=55.2%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:27.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.723 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:27.723 issued rwts: total=121355,125169,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:27.723 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:27.723 00:19:27.723 Run status group 0 (all jobs): 00:19:27.723 READ: bw=47.4MiB/s (49.7MB/s), 47.4MiB/s-47.4MiB/s (49.7MB/s-49.7MB/s), io=474MiB (497MB), run=10001-10001msec 00:19:27.723 WRITE: bw=49.5MiB/s (51.9MB/s), 49.5MiB/s-49.5MiB/s (51.9MB/s-51.9MB/s), io=489MiB (513MB), run=9871-9871msec 00:19:27.982 ----------------------------------------------------- 00:19:27.982 Suppressions used: 00:19:27.982 count bytes template 00:19:27.982 1 7 /usr/src/fio/parse.c 00:19:27.982 63 6048 /usr/src/fio/iolog.c 00:19:27.982 1 8 libtcmalloc_minimal.so 00:19:27.982 1 904 libcrypto.so 00:19:27.982 ----------------------------------------------------- 00:19:27.982 00:19:27.982 00:19:27.982 real 0m12.721s 00:19:27.982 user 0m12.945s 00:19:27.982 sys 0m0.751s 00:19:27.982 05:57:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.982 05:57:35 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:19:27.982 ************************************ 00:19:27.982 END TEST bdev_fio_rw_verify 00:19:27.982 ************************************ 00:19:27.982 05:57:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:19:27.982 05:57:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:27.982 05:57:35 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:19:27.982 05:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:27.982 05:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:19:27.982 05:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:19:27.982 05:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:19:27.982 05:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:19:27.982 05:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:19:27.982 05:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:19:27.982 05:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:19:27.983 05:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:27.983 05:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:19:27.983 05:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:19:27.983 05:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:19:27.983 05:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:19:27.983 05:57:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "50ed711e-7a73-4733-af73-e9cc999d8022"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "50ed711e-7a73-4733-af73-e9cc999d8022",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "50ed711e-7a73-4733-af73-e9cc999d8022",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "10b36f16-d8d1-41db-8ff0-6824c12e95e6",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "6b942dc3-01de-4242-a54e-f7b89dcce187",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "bb511c4e-663f-4148-8b3d-8856879c1fd5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:27.983 05:57:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:19:28.243 05:57:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:19:28.243 05:57:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:19:28.243 05:57:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:19:28.243 /home/vagrant/spdk_repo/spdk 00:19:28.243 05:57:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:19:28.243 05:57:35 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:19:28.243 00:19:28.243 real 0m13.023s 00:19:28.243 user 0m13.082s 00:19:28.243 sys 0m0.889s 00:19:28.243 05:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.243 ************************************ 00:19:28.243 END TEST bdev_fio 00:19:28.243 ************************************ 00:19:28.243 05:57:35 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:19:28.243 05:57:35 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:28.243 05:57:35 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:28.243 05:57:35 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:28.243 05:57:35 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:28.243 05:57:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:28.243 ************************************ 00:19:28.243 START TEST bdev_verify 00:19:28.243 ************************************ 00:19:28.243 05:57:35 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:19:28.243 [2024-12-12 05:57:35.694265] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 
00:19:28.243 [2024-12-12 05:57:35.694376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89292 ] 00:19:28.502 [2024-12-12 05:57:35.868414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:28.502 [2024-12-12 05:57:35.979669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.502 [2024-12-12 05:57:35.979694] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.071 Running I/O for 5 seconds... 00:19:31.394 10384.00 IOPS, 40.56 MiB/s [2024-12-12T05:57:39.855Z] 10446.50 IOPS, 40.81 MiB/s [2024-12-12T05:57:40.795Z] 10429.67 IOPS, 40.74 MiB/s [2024-12-12T05:57:41.733Z] 10393.00 IOPS, 40.60 MiB/s [2024-12-12T05:57:41.733Z] 10396.60 IOPS, 40.61 MiB/s 00:19:34.211 Latency(us) 00:19:34.211 [2024-12-12T05:57:41.734Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.212 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:34.212 Verification LBA range: start 0x0 length 0x2000 00:19:34.212 raid5f : 5.02 4169.41 16.29 0.00 0.00 46163.28 162.77 32739.38 00:19:34.212 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:34.212 Verification LBA range: start 0x2000 length 0x2000 00:19:34.212 raid5f : 5.01 6200.84 24.22 0.00 0.00 31139.56 275.45 22436.78 00:19:34.212 [2024-12-12T05:57:41.734Z] =================================================================================================================== 00:19:34.212 [2024-12-12T05:57:41.734Z] Total : 10370.25 40.51 0.00 0.00 37182.45 162.77 32739.38 00:19:35.593 00:19:35.593 real 0m7.219s 00:19:35.593 user 0m13.351s 00:19:35.593 sys 0m0.281s 00:19:35.593 05:57:42 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:35.593 05:57:42 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:35.593 ************************************ 00:19:35.593 END TEST bdev_verify 00:19:35.593 ************************************ 00:19:35.593 05:57:42 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:35.593 05:57:42 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:19:35.593 05:57:42 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:35.593 05:57:42 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:35.593 ************************************ 00:19:35.593 START TEST bdev_verify_big_io 00:19:35.593 ************************************ 00:19:35.593 05:57:42 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:35.593 [2024-12-12 05:57:42.987851] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:19:35.593 [2024-12-12 05:57:42.988014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89343 ] 00:19:35.854 [2024-12-12 05:57:43.160342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:35.854 [2024-12-12 05:57:43.272334] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.854 [2024-12-12 05:57:43.272363] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.423 Running I/O for 5 seconds... 
00:19:38.741 633.00 IOPS, 39.56 MiB/s [2024-12-12T05:57:46.833Z] 760.00 IOPS, 47.50 MiB/s [2024-12-12T05:57:48.214Z] 761.33 IOPS, 47.58 MiB/s [2024-12-12T05:57:49.154Z] 760.75 IOPS, 47.55 MiB/s [2024-12-12T05:57:49.154Z] 761.60 IOPS, 47.60 MiB/s 00:19:41.632 Latency(us) 00:19:41.632 [2024-12-12T05:57:49.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.632 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:41.632 Verification LBA range: start 0x0 length 0x200 00:19:41.632 raid5f : 5.20 341.93 21.37 0.00 0.00 9300872.47 194.96 399283.09 00:19:41.632 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:41.632 Verification LBA range: start 0x200 length 0x200 00:19:41.632 raid5f : 5.26 434.85 27.18 0.00 0.00 7399532.80 173.50 320525.41 00:19:41.632 [2024-12-12T05:57:49.154Z] =================================================================================================================== 00:19:41.632 [2024-12-12T05:57:49.154Z] Total : 776.78 48.55 0.00 0.00 8231368.90 173.50 399283.09 00:19:43.014 00:19:43.014 real 0m7.438s 00:19:43.014 user 0m13.792s 00:19:43.014 sys 0m0.276s 00:19:43.014 05:57:50 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:43.014 05:57:50 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:43.014 ************************************ 00:19:43.014 END TEST bdev_verify_big_io 00:19:43.014 ************************************ 00:19:43.014 05:57:50 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:43.014 05:57:50 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:43.014 05:57:50 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:43.014 05:57:50 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:43.014 ************************************ 00:19:43.014 START TEST bdev_write_zeroes 00:19:43.014 ************************************ 00:19:43.014 05:57:50 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:43.014 [2024-12-12 05:57:50.496821] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:19:43.014 [2024-12-12 05:57:50.496946] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89399 ] 00:19:43.274 [2024-12-12 05:57:50.668676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:43.274 [2024-12-12 05:57:50.771690] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.844 Running I/O for 1 seconds... 
00:19:44.784 30063.00 IOPS, 117.43 MiB/s 00:19:44.784 Latency(us) 00:19:44.784 [2024-12-12T05:57:52.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:44.785 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:44.785 raid5f : 1.01 30042.02 117.35 0.00 0.00 4249.62 1309.29 5895.38 00:19:44.785 [2024-12-12T05:57:52.307Z] =================================================================================================================== 00:19:44.785 [2024-12-12T05:57:52.307Z] Total : 30042.02 117.35 0.00 0.00 4249.62 1309.29 5895.38 00:19:46.167 00:19:46.167 real 0m3.186s 00:19:46.167 user 0m2.813s 00:19:46.167 sys 0m0.248s 00:19:46.167 05:57:53 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:46.167 05:57:53 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:46.167 ************************************ 00:19:46.167 END TEST bdev_write_zeroes 00:19:46.167 ************************************ 00:19:46.167 05:57:53 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:46.167 05:57:53 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:46.167 05:57:53 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:46.167 05:57:53 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:46.167 ************************************ 00:19:46.167 START TEST bdev_json_nonenclosed 00:19:46.167 ************************************ 00:19:46.167 05:57:53 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:46.427 [2024-12-12 
05:57:53.743883] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:19:46.427 [2024-12-12 05:57:53.744026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89434 ] 00:19:46.427 [2024-12-12 05:57:53.914807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.687 [2024-12-12 05:57:54.016619] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.687 [2024-12-12 05:57:54.016723] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:46.687 [2024-12-12 05:57:54.016748] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:46.687 [2024-12-12 05:57:54.016757] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:46.947 00:19:46.947 real 0m0.591s 00:19:46.947 user 0m0.359s 00:19:46.947 sys 0m0.127s 00:19:46.947 05:57:54 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:46.947 05:57:54 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:46.947 ************************************ 00:19:46.947 END TEST bdev_json_nonenclosed 00:19:46.947 ************************************ 00:19:46.947 05:57:54 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:46.947 05:57:54 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:19:46.947 05:57:54 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:46.947 05:57:54 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:46.947 
************************************ 00:19:46.947 START TEST bdev_json_nonarray 00:19:46.947 ************************************ 00:19:46.947 05:57:54 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:46.947 [2024-12-12 05:57:54.415531] Starting SPDK v25.01-pre git sha1 d58eef2a2 / DPDK 24.03.0 initialization... 00:19:46.947 [2024-12-12 05:57:54.415657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89454 ] 00:19:47.207 [2024-12-12 05:57:54.590530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:47.207 [2024-12-12 05:57:54.704341] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.207 [2024-12-12 05:57:54.704453] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:47.207 [2024-12-12 05:57:54.704469] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:19:47.207 [2024-12-12 05:57:54.704486] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:47.467
00:19:47.467 real 0m0.614s
00:19:47.467 user 0m0.373s
00:19:47.467 sys 0m0.137s
00:19:47.467 05:57:54 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:47.467 05:57:54 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:19:47.467 ************************************
00:19:47.467 END TEST bdev_json_nonarray
00:19:47.467 ************************************
00:19:47.725 05:57:55 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]]
00:19:47.725 05:57:55 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]]
00:19:47.725 05:57:55 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]]
00:19:47.725 05:57:55 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:19:47.725 05:57:55 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup
00:19:47.725 05:57:55 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:19:47.725 05:57:55 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:19:47.725 05:57:55 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]]
00:19:47.725 05:57:55 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]]
00:19:47.725 05:57:55 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]]
00:19:47.725 05:57:55 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]]
00:19:47.725
00:19:47.725 real 0m48.565s
00:19:47.725 user 1m5.031s
00:19:47.725 sys 0m5.513s
00:19:47.725 05:57:55 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable
00:19:47.725 05:57:55 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x
00:19:47.725 ************************************
00:19:47.725 END TEST blockdev_raid5f
00:19:47.725 ************************************
00:19:47.725 05:57:55 -- spdk/autotest.sh@194 -- # uname -s
00:19:47.725 05:57:55 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:19:47.725 05:57:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:19:47.725 05:57:55 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]]
00:19:47.726 05:57:55 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']'
00:19:47.726 05:57:55 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:19:47.726 05:57:55 -- spdk/autotest.sh@260 -- # timing_exit lib
00:19:47.726 05:57:55 -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:47.726 05:57:55 -- common/autotest_common.sh@10 -- # set +x
00:19:47.726 05:57:55 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:19:47.726 05:57:55 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:19:47.726 05:57:55 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:19:47.726 05:57:55 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:19:47.726 05:57:55 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:19:47.726 05:57:55 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:19:47.726 05:57:55 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:19:47.726 05:57:55 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:19:47.726 05:57:55 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:19:47.726 05:57:55 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:19:47.726 05:57:55 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:19:47.726 05:57:55 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:19:47.726 05:57:55 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:19:47.726 05:57:55 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:19:47.726 05:57:55 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:19:47.726 05:57:55 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:19:47.726 05:57:55 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:19:47.726 05:57:55 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:19:47.726 05:57:55 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:19:47.726 05:57:55 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:19:47.726 05:57:55 -- common/autotest_common.sh@726 -- # xtrace_disable
00:19:47.726 05:57:55 -- common/autotest_common.sh@10 -- # set +x
00:19:47.726 05:57:55 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:19:47.726 05:57:55 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:19:47.726 05:57:55 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:19:47.726 05:57:55 -- common/autotest_common.sh@10 -- # set +x
00:19:50.265 INFO: APP EXITING
00:19:50.265 INFO: killing all VMs
00:19:50.265 INFO: killing vhost app
00:19:50.265 INFO: EXIT DONE
00:19:50.550 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:19:50.550 Waiting for block devices as requested
00:19:50.550 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:19:50.810 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:19:51.751 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:19:51.751 Cleaning
00:19:51.751 Removing: /var/run/dpdk/spdk0/config
00:19:51.751 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:19:51.751 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:19:51.751 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:19:51.751 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:19:51.751 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:19:51.751 Removing: /var/run/dpdk/spdk0/hugepage_info
00:19:51.751 Removing: /dev/shm/spdk_tgt_trace.pid57987
00:19:51.751 Removing: /var/run/dpdk/spdk0
00:19:51.751 Removing: /var/run/dpdk/spdk_pid57758
00:19:51.751 Removing: /var/run/dpdk/spdk_pid57987
00:19:51.751 Removing: /var/run/dpdk/spdk_pid58216
00:19:51.751 Removing: /var/run/dpdk/spdk_pid58326
00:19:51.751 Removing: /var/run/dpdk/spdk_pid58371
00:19:51.751 Removing: /var/run/dpdk/spdk_pid58501
00:19:51.751 Removing: /var/run/dpdk/spdk_pid58527
00:19:51.751 Removing: /var/run/dpdk/spdk_pid58727
00:19:51.751 Removing: /var/run/dpdk/spdk_pid58844
00:19:51.751 Removing: /var/run/dpdk/spdk_pid58946
00:19:51.751 Removing: /var/run/dpdk/spdk_pid59069
00:19:51.751 Removing: /var/run/dpdk/spdk_pid59177
00:19:51.751 Removing: /var/run/dpdk/spdk_pid59216
00:19:51.751 Removing: /var/run/dpdk/spdk_pid59253
00:19:51.751 Removing: /var/run/dpdk/spdk_pid59329
00:19:51.751 Removing: /var/run/dpdk/spdk_pid59446
00:19:52.011 Removing: /var/run/dpdk/spdk_pid59888
00:19:52.011 Removing: /var/run/dpdk/spdk_pid59963
00:19:52.011 Removing: /var/run/dpdk/spdk_pid60037
00:19:52.011 Removing: /var/run/dpdk/spdk_pid60053
00:19:52.011 Removing: /var/run/dpdk/spdk_pid60194
00:19:52.011 Removing: /var/run/dpdk/spdk_pid60217
00:19:52.011 Removing: /var/run/dpdk/spdk_pid60363
00:19:52.011 Removing: /var/run/dpdk/spdk_pid60379
00:19:52.011 Removing: /var/run/dpdk/spdk_pid60449
00:19:52.011 Removing: /var/run/dpdk/spdk_pid60468
00:19:52.011 Removing: /var/run/dpdk/spdk_pid60532
00:19:52.011 Removing: /var/run/dpdk/spdk_pid60556
00:19:52.011 Removing: /var/run/dpdk/spdk_pid60751
00:19:52.011 Removing: /var/run/dpdk/spdk_pid60782
00:19:52.011 Removing: /var/run/dpdk/spdk_pid60871
00:19:52.011 Removing: /var/run/dpdk/spdk_pid62204
00:19:52.011 Removing: /var/run/dpdk/spdk_pid62410
00:19:52.011 Removing: /var/run/dpdk/spdk_pid62556
00:19:52.011 Removing: /var/run/dpdk/spdk_pid63194
00:19:52.011 Removing: /var/run/dpdk/spdk_pid63405
00:19:52.011 Removing: /var/run/dpdk/spdk_pid63545
00:19:52.011 Removing: /var/run/dpdk/spdk_pid64183
00:19:52.011 Removing: /var/run/dpdk/spdk_pid64513
00:19:52.011 Removing: /var/run/dpdk/spdk_pid64652
00:19:52.011 Removing: /var/run/dpdk/spdk_pid66027
00:19:52.011 Removing: /var/run/dpdk/spdk_pid66285
00:19:52.011 Removing: /var/run/dpdk/spdk_pid66426
00:19:52.011 Removing: /var/run/dpdk/spdk_pid67806
00:19:52.011 Removing: /var/run/dpdk/spdk_pid68059
00:19:52.011 Removing: /var/run/dpdk/spdk_pid68199
00:19:52.011 Removing: /var/run/dpdk/spdk_pid69574
00:19:52.011 Removing: /var/run/dpdk/spdk_pid70014
00:19:52.011 Removing: /var/run/dpdk/spdk_pid70160
00:19:52.011 Removing: /var/run/dpdk/spdk_pid71637
00:19:52.011 Removing: /var/run/dpdk/spdk_pid71896
00:19:52.011 Removing: /var/run/dpdk/spdk_pid72042
00:19:52.011 Removing: /var/run/dpdk/spdk_pid73520
00:19:52.011 Removing: /var/run/dpdk/spdk_pid73785
00:19:52.011 Removing: /var/run/dpdk/spdk_pid73939
00:19:52.011 Removing: /var/run/dpdk/spdk_pid75422
00:19:52.011 Removing: /var/run/dpdk/spdk_pid75909
00:19:52.011 Removing: /var/run/dpdk/spdk_pid76055
00:19:52.011 Removing: /var/run/dpdk/spdk_pid76199
00:19:52.011 Removing: /var/run/dpdk/spdk_pid76616
00:19:52.011 Removing: /var/run/dpdk/spdk_pid77347
00:19:52.011 Removing: /var/run/dpdk/spdk_pid77713
00:19:52.011 Removing: /var/run/dpdk/spdk_pid78417
00:19:52.011 Removing: /var/run/dpdk/spdk_pid78756
00:19:52.011 Removing: /var/run/dpdk/spdk_pid79360
00:19:52.011 Removing: /var/run/dpdk/spdk_pid79690
00:19:52.011 Removing: /var/run/dpdk/spdk_pid81403
00:19:52.011 Removing: /var/run/dpdk/spdk_pid81798
00:19:52.011 Removing: /var/run/dpdk/spdk_pid82144
00:19:52.011 Removing: /var/run/dpdk/spdk_pid83957
00:19:52.011 Removing: /var/run/dpdk/spdk_pid84394
00:19:52.272 Removing: /var/run/dpdk/spdk_pid84790
00:19:52.272 Removing: /var/run/dpdk/spdk_pid85663
00:19:52.272 Removing: /var/run/dpdk/spdk_pid85954
00:19:52.272 Removing: /var/run/dpdk/spdk_pid86751
00:19:52.272 Removing: /var/run/dpdk/spdk_pid87039
00:19:52.272 Removing: /var/run/dpdk/spdk_pid87834
00:19:52.272 Removing: /var/run/dpdk/spdk_pid88121
00:19:52.272 Removing: /var/run/dpdk/spdk_pid88697
00:19:52.272 Removing: /var/run/dpdk/spdk_pid88917
00:19:52.272 Removing: /var/run/dpdk/spdk_pid88960
00:19:52.272 Removing: /var/run/dpdk/spdk_pid88990
00:19:52.272 Removing: /var/run/dpdk/spdk_pid89190
00:19:52.272 Removing: /var/run/dpdk/spdk_pid89292
00:19:52.272 Removing: /var/run/dpdk/spdk_pid89343
00:19:52.272 Removing: /var/run/dpdk/spdk_pid89399
00:19:52.272 Removing: /var/run/dpdk/spdk_pid89434
00:19:52.272 Removing: /var/run/dpdk/spdk_pid89454
00:19:52.272 Clean
00:19:52.272 05:57:59 -- common/autotest_common.sh@1453 -- # return 0
00:19:52.272 05:57:59 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:19:52.272 05:57:59 -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:52.272 05:57:59 -- common/autotest_common.sh@10 -- # set +x
00:19:52.272 05:57:59 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:19:52.272 05:57:59 -- common/autotest_common.sh@732 -- # xtrace_disable
00:19:52.272 05:57:59 -- common/autotest_common.sh@10 -- # set +x
00:19:52.532 05:57:59 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:19:52.532 05:57:59 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:19:52.532 05:57:59 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:19:52.532 05:57:59 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:19:52.532 05:57:59 -- spdk/autotest.sh@398 -- # hostname
00:19:52.532 05:57:59 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:19:52.532 geninfo: WARNING: invalid characters removed from testname!
00:20:19.163 05:58:26 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:21.703 05:58:28 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:23.613 05:58:30 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:25.523 05:58:32 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:27.433 05:58:34 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:29.342 05:58:36 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:20:31.883 05:58:38 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:20:31.883 05:58:38 -- spdk/autorun.sh@1 -- $ timing_finish
00:20:31.883 05:58:38 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:20:31.883 05:58:38 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:20:31.883 05:58:38 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:20:31.883 05:58:38 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:20:31.883 + [[ -n 5413 ]]
00:20:31.883 + sudo kill 5413
00:20:31.893 [Pipeline] }
00:20:31.909 [Pipeline] // timeout
00:20:31.914 [Pipeline] }
00:20:31.927 [Pipeline] // stage
00:20:31.932 [Pipeline] }
00:20:31.946 [Pipeline] // catchError
00:20:31.954 [Pipeline] stage
00:20:31.956 [Pipeline] { (Stop VM)
00:20:31.968 [Pipeline] sh
00:20:32.251 + vagrant halt
00:20:34.791 ==> default: Halting domain...
00:20:42.936 [Pipeline] sh
00:20:43.219 + vagrant destroy -f
00:20:45.790 ==> default: Removing domain...
00:20:45.820 [Pipeline] sh
00:20:46.106 + mv output /var/jenkins/workspace/raid-vg-autotest/output
00:20:46.116 [Pipeline] }
00:20:46.131 [Pipeline] // stage
00:20:46.137 [Pipeline] }
00:20:46.151 [Pipeline] // dir
00:20:46.156 [Pipeline] }
00:20:46.170 [Pipeline] // wrap
00:20:46.177 [Pipeline] }
00:20:46.189 [Pipeline] // catchError
00:20:46.198 [Pipeline] stage
00:20:46.201 [Pipeline] { (Epilogue)
00:20:46.213 [Pipeline] sh
00:20:46.498 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:20:50.709 [Pipeline] catchError
00:20:50.710 [Pipeline] {
00:20:50.722 [Pipeline] sh
00:20:51.006 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:20:51.006 Artifacts sizes are good
00:20:51.016 [Pipeline] }
00:20:51.030 [Pipeline] // catchError
00:20:51.041 [Pipeline] archiveArtifacts
00:20:51.048 Archiving artifacts
00:20:51.146 [Pipeline] cleanWs
00:20:51.158 [WS-CLEANUP] Deleting project workspace...
00:20:51.158 [WS-CLEANUP] Deferred wipeout is used...
00:20:51.166 [WS-CLEANUP] done
00:20:51.168 [Pipeline] }
00:20:51.183 [Pipeline] // stage
00:20:51.189 [Pipeline] }
00:20:51.203 [Pipeline] // node
00:20:51.209 [Pipeline] End of Pipeline
00:20:51.256 Finished: SUCCESS